Website •
- About •
Installation •
- Using OpenDR toolkit •
- Examples •
+ Python Examples •
+ ROS1 •
+ ROS2 •
+ C API •
+ Customization •
+ Known Issues •
Roadmap •
Changelog •
License
@@ -33,19 +36,40 @@ OpenDR focuses on the **AI and Cognition core technology** in order to provide t
As a result, the developed OpenDR toolkit will also enable cooperative human-robot interaction as well as the development of cognitive mechatronics, where sensing and actuation are closely coupled with cognitive systems, thus contributing to two more core technologies beyond AI and Cognition.
OpenDR aims to develop, train, deploy and evaluate deep learning models that improve the technical capabilities of the core technologies beyond the current state of the art.
-## Installing OpenDR Toolkit
+## Where to start?
+
+You can start by [installing](docs/reference/installation.md) the OpenDR toolkit.
OpenDR can be installed in the following ways:
1. By *cloning* this repository (CPU/GPU support)
2. Using *pip* (CPU/GPU support only)
3. Using *docker* (CPU/GPU support)
-You can find detailed installation instruction in the [documentation](docs/reference/installation.md).
-## Using OpenDR toolkit
+## What does OpenDR provide?
+
OpenDR provides an intuitive and easy-to-use **[Python interface](src/opendr)**, a **[C API](src/c_api) for performance-critical applications**, a wealth of **[usage examples and supporting tools](projects)**, as well as **ready-to-use [ROS nodes](projects/opendr_ws)**.
OpenDR is built to support [Webots Open Source Robot Simulator](https://cyberbotics.com/), while it also extensively follows industry standards, such as [ONNX model format](https://onnx.ai/) and [OpenAI Gym Interface](https://gym.openai.com/).
-You can find detailed documentation in OpenDR [wiki](https://github.com/tasostefas/opendr_internal/wiki), as well as in the [tools index](docs/reference/index.md).
+
+## How can I start using OpenDR?
+
+You can find detailed documentation in the OpenDR [wiki](https://github.com/opendr-eu/opendr/wiki).
+The main point of reference after installing the toolkit is the [tools index](docs/reference/index.md).
+Starting from there, you can find detailed documentation for all the tools included in OpenDR.
+
+- If you are interested in ready-to-use ROS nodes, then you can directly jump to our [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) workspaces.
+- If you are interested in ready-to-use examples, you can check out the [projects](projects/python) folder, which contains examples and tutorials for [perception](projects/python/perception), [control](projects/python/control), [simulation](projects/python/simulation) and [hyperparameter tuning](projects/python/utils) tools.
+- If you want to explore our C API, then you can explore the provided [C demos](projects/c_api).
+
+## How can I interface OpenDR?
+
+OpenDR is built upon Python.
+Therefore, the main OpenDR interface is written in Python and is available through the [opendr](src/opendr) package.
+Furthermore, OpenDR provides [ROS1](projects/opendr_ws) and [ROS2](projects/opendr_ws_2) interfaces, as well as a [C interface](projects/c_api).
+Note that you can use as many tools as you wish at the same time, since there is no software limitation on the number of tools that can run concurrently.
+However, hardware limitations (e.g., GPU memory) might restrict the number of tools that can run at any given moment.
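+
+As a minimal illustrative sketch (the class name and arguments below are taken from the activity recognition tool documented in `docs/reference` and will differ per tool; see the tools index for the exact APIs), using an OpenDR learner from Python looks roughly like this:
+
+```python
+from opendr.perception.activity_recognition import CoTransEncLearner
+
+# Every OpenDR tool is exposed as a Learner with a common interface:
+# fit(), eval(), infer(), save(), load() and, where supported, optimize().
+learner = CoTransEncLearner(device="cpu", input_dims=8, hidden_dims=32,
+                            sequence_len=64, num_heads=8, num_classes=4)
+# learner.fit(dataset=...)       # train on an OpenDR dataset
+# prediction = learner.infer(x)  # run inference on a single input
+```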
+
+
## Roadmap
OpenDR has the following roadmap:
@@ -54,15 +78,15 @@ OpenDR has the following roadmap:
- **v3.0 (2023)**: Active perception-enabled deep learning tools for improved robotic perception
## How to contribute
-Please follow the instructions provided in the [wiki](https://github.com/tasostefas/opendr_internal/wiki).
+Please follow the instructions provided in the [wiki](https://github.com/opendr-eu/opendr/wiki).
## How to cite us
If you use OpenDR for your research, please cite the following paper that introduces OpenDR architecture and design:
-@article{opendr2022,
+@inproceedings{opendr2022,
title={OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint Deep Learning for Robotics},
author={Passalis, Nikolaos and Pedrazzi, Stefania and Babuska, Robert and Burgard, Wolfram and Dias, Daniel and Ferro, Francesco and Gabbouj, Moncef and Green, Ole and Iosifidis, Alexandros and Kayacan, Erdal and Kober, Jens and Michel, Olivier and Nikolaidis, Nikos and Nousi, Paraskevi and Pieters, Roel and Tzelepi, Maria and Valada, Abhinav and Tefas, Anastasios},
- journal={arXiv preprint arXiv:2203.00403},
+ booktitle = {Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (to appear)},
year={2022}
}
diff --git a/bin/activate_nvidia.sh b/bin/activate_nvidia.sh
new file mode 100755
index 0000000000..15df6b6870
--- /dev/null
+++ b/bin/activate_nvidia.sh
@@ -0,0 +1,12 @@
+#!/bin/sh
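+# OpenDR environment variables; source this script from the root of the OpenDR toolkit so that OPENDR_HOME points to it.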
+export OPENDR_HOME=$PWD
+export PYTHONPATH=$OPENDR_HOME/src:$PYTHONPATH
+alias python=python3
+export LD_LIBRARY_PATH=$OPENDR_HOME/lib:$LD_LIBRARY_PATH
+
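+# CUDA and MXNet settings (MXNet is built from source under $OPENDR_HOME/mxnet by bin/install_nvidia.sh), plus locale and matplotlib backend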
+export PATH=/usr/local/cuda/bin:$PATH
+export MXNET_HOME=$OPENDR_HOME/mxnet/
+export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH
+export MXNET_CUDNN_AUTOTUNE_DEFAULT=0
+export LC_ALL="C.UTF-8"
+export MPLBACKEND=TkAgg
diff --git a/bin/build_wheel.sh b/bin/build_wheel.sh
index acfa33680c..42a184d4c2 100755
--- a/bin/build_wheel.sh
+++ b/bin/build_wheel.sh
@@ -7,7 +7,7 @@ git submodule update --init --recursive
rm dist/*
rm src/*egg-info -rf
-pip install cython numpy
+python3 -m pip install cython numpy
# Build OpenDR packages
while read p; do
@@ -17,5 +17,7 @@ while read p; do
python3 setup.py sdist
done < packages.txt
+# Cleanup
+rm src/*egg-info -rf
rm setup.py
rm MANIFEST.in
diff --git a/bin/install.sh b/bin/install.sh
index d6a75fe65a..ddced59961 100755
--- a/bin/install.sh
+++ b/bin/install.sh
@@ -9,6 +9,15 @@ if [[ -z "${OPENDR_DEVICE}" ]]; then
export OPENDR_DEVICE=cpu
fi
+if [[ -z "${ROS_DISTRO}" ]]; then
+ echo "[INFO] No ROS_DISTRO is specified. The modules relying on ROS/ROS2 might not work."
+else
+ if ! ([[ ${ROS_DISTRO} == "noetic" || ${ROS_DISTRO} == "melodic" || ${ROS_DISTRO} == "foxy" || ${ROS_DISTRO} == "humble" ]]); then
+ echo "[ERROR] ${ROS_DISTRO} is not a supported ROS_DISTRO. Please use 'noetic' or 'melodic' for ROS and 'foxy' or 'humble' for ROS2."
+ exit 1
+ fi
+fi
+
# Install base ubuntu deps
sudo apt-get install --yes libfreetype6-dev lsb-release git python3-pip curl wget python3.8-venv
@@ -16,42 +25,47 @@ sudo apt-get install --yes libfreetype6-dev lsb-release git python3-pip curl wge
git submodule init
git submodule update
-case $(lsb_release -r |cut -f2) in
- "18.04")
- export ROS_DISTRO=melodic;;
- "20.04")
- export ROS_DISTRO=noetic;;
- *)
- echo "Not tested for this ubuntu version" && exit 1;;
-esac
-
# Create a virtual environment and update
python3 -m venv venv
source venv/bin/activate
python3 -m pip install -U pip
-pip3 install setuptools configparser
+python3 -m pip install setuptools configparser
# Add repositories for ROS
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list' \
- && curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
+ && curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
# Build OpenDR
make install_compilation_dependencies
make install_runtime_dependencies
-# Install additional ROS packages
-sudo apt-get install ros-noetic-vision-msgs ros-noetic-audio-common-msgs
+# ROS package dependencies
+if [[ ${ROS_DISTRO} == "noetic" || ${ROS_DISTRO} == "melodic" ]]; then
+ echo "Installing ROS dependencies"
+ sudo apt-get -y install ros-$ROS_DISTRO-vision-msgs ros-$ROS_DISTRO-geometry-msgs ros-$ROS_DISTRO-sensor-msgs ros-$ROS_DISTRO-audio-common-msgs ros-$ROS_DISTRO-usb-cam ros-$ROS_DISTRO-webots-ros
+fi
+
+# ROS2 package dependencies
+if [[ ${ROS_DISTRO} == "foxy" || ${ROS_DISTRO} == "humble" ]]; then
+ echo "Installing ROS2 dependencies"
+ sudo apt-get -y install python3-lark ros-$ROS_DISTRO-usb-cam ros-$ROS_DISTRO-webots-ros2 python3-colcon-common-extensions ros-$ROS_DISTRO-vision-msgs
+ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ros/$ROS_DISTRO/lib/controller
+ cd $OPENDR_HOME/projects/opendr_ws_2/
+ git clone --depth 1 --branch ros2 https://github.com/ros-drivers/audio_common src/audio_common
+ rosdep install -i --from-path src/audio_common --rosdistro $ROS_DISTRO -y
+ cd $OPENDR_HOME
+fi
# If working on GPU install GPU dependencies as needed
if [[ "${OPENDR_DEVICE}" == "gpu" ]]; then
- pip3 uninstall -y mxnet
- pip3 uninstall -y torch
+ python3 -m pip uninstall -y mxnet
+ python3 -m pip uninstall -y torch
echo "[INFO] Replacing mxnet-cu112==1.8.0post0 to enable CUDA acceleration."
- pip3 install mxnet-cu112==1.8.0post0
+ python3 -m pip install mxnet-cu112==1.8.0post0
echo "[INFO] Replacing torch==1.9.0+cu111 to enable CUDA acceleration."
- pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
+ python3 -m pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
echo "[INFO] Reinstalling detectronv2."
- pip3 install 'git+https://github.com/facebookresearch/detectron2.git@5aeb252b194b93dc2879b4ac34bc51a31b5aee13'
+ python3 -m pip install 'git+https://github.com/facebookresearch/detectron2.git@5aeb252b194b93dc2879b4ac34bc51a31b5aee13'
fi
make libopendr
diff --git a/bin/install_nvidia.sh b/bin/install_nvidia.sh
new file mode 100755
index 0000000000..f0f2901b05
--- /dev/null
+++ b/bin/install_nvidia.sh
@@ -0,0 +1,310 @@
+#!/bin/bash
+
+if [[ $1 = "tx2" ]];
+then
+ echo "Installing OpenDR on Nvidia TX2"
+elif [[ $1 = "agx" ]] || [[ $1 = "nx" ]]
+then
+ echo "Installing OpenDR on Nvidia AGX/NX"
+else
+ echo "Wrong argument, supported inputs are 'tx2', 'agx' and 'nx'"
+ exit 1
+fi
+
+# export OpenDR related paths
+export OPENDR_HOME=$PWD
+export PYTHONPATH=$OPENDR_HOME/src:$PYTHONPATH
+export PYTHON=python3
+export LD_LIBRARY_PATH=$OPENDR_HOME/src:$LD_LIBRARY_PATH
+
+# Install mxnet
+cd $OPENDR_HOME
+
+sudo apt-get install -y gfortran build-essential git python3-pip python-numpy libopencv-dev graphviz libopenblas-dev libopenblas-base libatlas-base-dev
+
+pip3 install --upgrade pip
+pip3 install setuptools==59.5.0
+pip3 install numpy==1.19.4
+
+git clone --recursive -b v1.8.x https://github.com/apache/incubator-mxnet.git mxnet
+
+export PATH=/usr/local/cuda/bin:$PATH
+export MXNET_HOME=$OPENDR_HOME/mxnet/
+export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH
+
+sudo rm /usr/local/cuda
+sudo ln -s /usr/local/cuda-10.2 /usr/local/cuda
+
+cd $MXNET_HOME
+cp $MXNET_HOME/make/config_jetson.mk config.mk
+sed -i 's/USE_CUDA = 0/USE_CUDA = 1/' config.mk
+sed -i 's/USE_CUDA_PATH = NONE/USE_CUDA_PATH = \/usr\/local\/cuda/' config.mk
+# CUDA_ARCH setting
+sed -i 's/CUDA_ARCH = -gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_72,code=sm_72/ /' config.mk
+sed -i 's/USE_CUDNN = 0/USE_CUDNN = 1/' config.mk
+
+if [[ $1 = "tx2" ]];
+then
+ sed -i '/USE_CUDNN/a CUDA_ARCH = -gencode arch=compute_62,code=sm_62' config.mk
+elif [[ $1 = "agx" ]] || [[ $1 = "nx" ]]
+then
+ echo "AGX or nx"
+ sed -i '/USE_CUDNN/a CUDA_ARCH = -gencode arch=compute_72,code=sm_72' config.mk
+else
+ echo "Wrong argument, supported inputs are 'tx2', 'agx' and 'nx'"
+fi
+
+make -j $(nproc) NVCC=/usr/local/cuda/bin/nvcc
+
+cd $MXNET_HOME/python
+sudo pip3 install -e .
+
+cd $OPENDR_HOME
+chmod a+rwx ./mxnet
+
+sudo apt-get update
+sudo apt-get install --yes libfreetype6-dev lsb-release curl wget
+
+git submodule init
+git submodule update
+
+pip3 install configparser
+
+# Install Torch
+sudo apt-get install --yes libopenblas-dev cmake ninja-build
+TORCH=torch-1.9.0-cp36-cp36m-linux_aarch64.whl
+wget https://nvidia.box.com/shared/static/h1z9sw4bb1ybi0rm3tu8qdj8hs05ljbm.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl
+
+pip3 install Cython
+pip3 install $TORCH
+rm ./torch-1.9.0-cp36-cp36m-linux_aarch64.whl
+
+# Install Torchvision
+TORCH_VISION=0.10.0
+sudo apt-get install --yes libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
+git clone --branch v0.10.0 https://github.com/pytorch/vision torchvision
+cd torchvision
+export BUILD_VERSION=0.10.0
+sudo python3 setup.py install
+cd ../
+rm -r torchvision/
+
+# Install dlib
+wget http://dlib.net/files/dlib-19.21.tar.bz2
+tar jxvf dlib-19.21.tar.bz2
+cd dlib-19.21/
+mkdir build
+cd build/
+cmake ..
+cmake --build .
+cd ../
+sudo python3 setup.py install
+cd $OPENDR_HOME
+rm dlib-19.21.tar.bz2
+
+apt-get install -y libprotobuf-dev protobuf-compiler
+apt-get install -y python3-tk
+# For AV
+apt-get update && apt-get install -y software-properties-common &&\
+ add-apt-repository -y ppa:jonathonf/ffmpeg-4
+
+apt-get update && apt-get install -y \
+ ffmpeg \
+ libavformat-dev \
+ libavcodec-dev \
+ libavdevice-dev \
+ libavutil-dev \
+ libswscale-dev \
+ libswresample-dev \
+ libavfilter-dev \
+ libeigen3-dev
+
+pip3 install av==8.0.1
+
+# Install rest of the dependencies of OpenDR
+
+pip3 install absl-py==1.0.0
+pip3 install aiohttp==3.8.1
+pip3 install aiosignal==1.2.0
+pip3 install alembic==1.7.5
+pip3 install appdirs==1.4.4
+pip3 install async-timeout==4.0.1
+pip3 install attrs==21.2.0
+pip3 install audioread==2.1.9
+pip3 install autocfg==0.0.8
+pip3 install Automat==20.2.0
+pip3 install autopage==0.4.0
+pip3 install bcolz==1.2.1
+pip3 cache purge
+pip3 install scikit-build==0.16.3
+pip3 install cachetools==4.2.4
+pip3 install catkin-pkg==0.4.24
+pip3 install catkin-tools==0.8.2
+pip3 install certifi==2021.10.8
+pip3 install cityscapesscripts==2.2.0
+pip3 install charset-normalizer==2.0.9
+pip3 install cliff==3.10.0
+pip3 install cloudpickle==1.5.0
+pip3 install cmaes==0.8.2
+pip3 install cmd2==2.3.3
+pip3 install colorlog==6.6.0
+pip3 install configparser==5.2.0
+pip3 install constantly==15.1.0
+pip3 install cycler==0.11.0
+pip3 install Cython==0.29.22
+pip3 install cython-bbox==0.1.3
+pip3 install decorator==5.1.0
+pip3 install defusedxml==0.7.1
+pip3 install distro==1.6.0
+pip3 install docutils==0.18.1
+pip3 install easydict==1.9
+pip3 install empy==3.3.4
+pip3 install filterpy==1.4.5
+pip3 install flake8==4.0.1
+pip3 install flake8-import-order==0.18.1
+pip3 install flask
+pip3 cache purge
+pip3 install frozenlist==1.2.0
+pip3 install fsspec==2021.11.1
+pip3 install future==0.18.2
+pip3 install gdown
+pip3 install gluoncv==0.11.0b20210908
+pip3 install google-auth==1.35.0
+pip3 install google-auth-oauthlib==0.4.6
+pip3 install graphviz==0.8.4
+pip3 install greenlet==1.1.2
+pip3 install grpcio==1.42.0
+pip3 install gym==0.21.0
+pip3 install hyperlink==21.0.0
+pip3 install idna==3.3
+pip3 install idna-ssl==1.1.0
+pip3 install imageio==2.6.0
+pip3 install imantics==0.1.12
+pip3 install imgaug==0.4.0
+pip3 install importlib-metadata==4.8.2
+pip3 install importlib-resources==5.4.0
+pip3 install imutils==0.5.4
+pip3 install incremental==21.3.0
+pip3 install iniconfig==1.1.1
+pip3 install ipython
+pip3 install joblib==1.0.1
+pip3 install kiwisolver==1.3.1
+pip3 install lap==0.4.0
+pip3 cache purge
+sudo apt-get install --yes llvm-10*
+sudo ln -s /usr/bin/llvm-config-10 /usr/bin/llvm-config
+pip3 install llvmlite==0.36.0
+sudo mv /usr/include/tbb/tbb.h /usr/include/tbb/tbb.h.bak
+pip3 install numba==0.53.1
+LLVM_CONFIG=/usr/bin/llvm-config-10 pip3 install librosa==0.8.0
+pip3 install lxml==4.6.3
+pip3 install Mako==1.1.6
+pip3 install Markdown==3.3.6
+pip3 install MarkupSafe==2.0.1
+pip3 install matplotlib==2.2.2
+pip3 install mccabe==0.6.1
+pip3 install mmcv==0.5.9
+pip3 install motmetrics==1.2.0
+pip3 install multidict==5.2.0
+pip3 install munkres==1.1.4
+pip3 install netifaces==0.11.0
+pip3 install networkx==2.5.1
+pip3 install numpy==1.19.4
+pip3 install oauthlib==3.1.1
+pip3 install onnx==1.10.2
+pip3 install onnxruntime==1.3.0
+pip3 install opencv-python==4.5.4.60
+pip3 install opencv-contrib-python==4.5.4.60
+pip3 cache purge
+pip3 install optuna==2.10.0
+pip3 install osrf-pycommon==1.0.0
+pip3 install packaging==21.3
+pip3 install pandas==1.1.5
+pip3 install pbr==5.8.0
+pip3 install Pillow==8.3.2
+pip3 install plotly==5.4.0
+pip3 install pluggy==1.0.0
+pip3 install pooch==1.5.2
+pip3 install portalocker==2.3.2
+pip3 install prettytable==2.4.0
+pip3 install progress==1.5
+pip3 install protobuf==3.19.6
+pip3 install py==1.11.0
+pip3 install py-cpuinfo==8.0.0
+pip3 install pyasn1==0.4.8
+pip3 install pyasn1-modules==0.2.8
+pip3 install pybind11==2.6.2
+pip3 install pycodestyle==2.8.0
+pip3 install pycparser==2.21
+pip3 install pyflakes==2.4.0
+pip3 install pyglet==1.5.16
+pip3 install pyparsing==3.0.6
+pip3 install pyperclip==1.8.2
+pip3 install pytest==6.2.5
+pip3 install pytest-benchmark==3.4.1
+pip3 install python-dateutil==2.8.2
+pip3 cache purge
+pip3 install pytz==2021.3
+pip3 install PyWavelets==1.1.1
+pip3 install --ignore-installed PyYAML==5.3
+pip3 install requests==2.26.0
+pip3 install requests-oauthlib==1.3.0
+pip3 install resampy==0.2.2
+pip3 install rosdep==0.21.0
+pip3 install rosdistro==0.8.3
+pip3 install roslibpy==1.2.1
+pip3 install rospkg==1.3.0
+pip3 install rsa==4.8
+pip3 install scikit-image==0.16.2
+pip3 install scikit-learn==0.22
+pip3 install seaborn==0.11.2
+pip3 install setuptools-rust==1.1.2
+pip3 install scipy==1.5.4
+pip3 install Shapely==1.5.9
+pip3 install six==1.16.0
+pip3 install SoundFile==0.10.3.post1
+pip3 install SQLAlchemy==1.4.28
+pip3 install stable-baselines3==1.1.0
+pip3 install stevedore==3.5.0
+pip3 install tabulate==0.8.9
+pip3 install tenacity==8.0.1
+pip3 install tensorboard==2.4.1
+pip3 install tensorboard-plugin-wit==1.8.0
+pip3 install tensorboardX==2.0
+pip3 cache purge
+pip3 install toml==0.10.2
+pip3 install tqdm==4.54.0
+pip3 install trimesh==3.5.23
+pip3 install Twisted==21.7.0
+pip3 install txaio==21.2.1
+pip3 install typing_extensions==4.0.1
+pip3 install urllib3==1.26.7
+pip3 install vcstool==0.3.0
+pip3 install wcwidth==0.2.5
+pip3 install Werkzeug==2.0.2
+pip3 install xmljson==0.2.1
+pip3 install xmltodict==0.12.0
+pip3 install yacs==0.1.8
+pip3 install yarl==1.7.2
+pip3 install zipp==3.6.0
+pip3 install zope.interface==5.4.0
+pip3 install wheel
+pip3 install pytorch-lightning==1.2.3
+pip3 install omegaconf==2.3.0
+pip3 install ninja
+pip3 install terminaltables
+pip3 install psutil
+pip3 install "continual-inference>=1.0.2"
+pip3 install git+https://github.com/waspinator/pycococreator.git@0.2.0
+pip3 install git+https://github.com/cidl-auth/cocoapi@03ee5a19844e253b8365dbbf35c1e5d8ca2e7281#subdirectory=PythonAPI
+pip3 install git+https://github.com/cocodataset/panopticapi.git@7bb4655548f98f3fedc07bf37e9040a992b054b0
+pip3 install git+https://github.com/mapillary/inplace_abn.git
+pip3 install git+https://github.com/facebookresearch/detectron2.git@4841e70ee48da72c32304f9ebf98138c2a70048d
+pip3 install git+https://github.com/cidl-auth/DCNv2
+pip3 install ${OPENDR_HOME}/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS
+pip3 install ${OPENDR_HOME}/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS/efficientNet
+pip3 cache purge
+
+cd $OPENDR_HOME/src/opendr/perception/object_detection_2d/retinaface
+make
+cd $OPENDR_HOME
diff --git a/dependencies/parse_dependencies.py b/dependencies/parse_dependencies.py
index 31fdc20829..608abdcd51 100644
--- a/dependencies/parse_dependencies.py
+++ b/dependencies/parse_dependencies.py
@@ -65,7 +65,7 @@ def read_ini_key(key, summary_file):
# Loop through tools and extract dependencies
if not global_dependencies:
opendr_home = os.environ.get('OPENDR_HOME')
- for dir_to_walk in ['src', 'projects/control/eagerx']:
+ for dir_to_walk in ['src', 'projects/python/control/eagerx']:
for subdir, dirs, files in os.walk(os.path.join(opendr_home, dir_to_walk)):
for filename in files:
if filename == 'dependencies.ini':
diff --git a/dependencies/pip_requirements.txt b/dependencies/pip_requirements.txt
deleted file mode 100644
index 76a12feebd..0000000000
--- a/dependencies/pip_requirements.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-numpy==1.17.5
-Cython
-torch==1.7.1
-wheel
-git+https://github.com/cidl-auth/cocoapi@03ee5a19844e253b8365dbbf35c1e5d8ca2e7281#subdirectory=PythonAPI
-git+https://github.com/cocodataset/panopticapi.git@7bb4655548f98f3fedc07bf37e9040a992b054b0
-git+https://github.com/MatthewHowe/DCNv2@194f5733c667cf13e5bd478a8c5bf27573ffa98c
\ No newline at end of file
diff --git a/docs/reference/activity-recognition.md b/docs/reference/activity-recognition.md
index 733ba2207e..22214a87dc 100644
--- a/docs/reference/activity-recognition.md
+++ b/docs/reference/activity-recognition.md
@@ -2,6 +2,7 @@
The *activity_recognition* module contains the *X3DLearner* and *CoX3DLearner* classes, which inherit from the abstract class *Learner*.
+You can find the activity classes and their corresponding IDs [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv).
### Class X3DLearner
Bases: `engine.learners.Learner`
@@ -146,7 +147,6 @@ Parameters:
Path to metadata file in json format or to weights path.
-
#### `X3DLearner.optimize`
```python
X3DLearner.optimize(self, do_constant_folding)
@@ -215,8 +215,6 @@ Parameters:
```
-
-
#### References
[1] X3D: Expanding Architectures for Efficient Video Recognition,
[arXiv](https://arxiv.org/abs/2004.04730).
@@ -398,7 +396,6 @@ Inherited from [X3DLearner](/src/opendr/perception/activity_recognition/x3d/x3d_
```
-
#### Performance Evaluation
TABLE-1: Input shapes, prediction accuracy on Kinetics 400, floating point operations (FLOPs), parameter count and maximum allocated memory of activity recognition learners at inference.
@@ -426,7 +423,7 @@ TABLE-2: Speed (evaluations/second) of activity recognition learner inference on
TABLE-3: Throughput (evaluations/second) of activity recognition learner inference on various computational devices.
-The largest fitting power of two was used as batch size for each device.
+The largest fitting power of two was used as batch size for each device.
| Model | CPU | TX2 | Xavier | RTX 2080 Ti |
| ------- | ----- | ---- | ------ | ----------- |
| X3D-L | 0.22 | 0.21 | 1.73 | 3.55 |
@@ -438,7 +435,7 @@ The largest fitting power of two was used as batch size for each device.
| CoX3D-S | 11.60 | 8.22 | 64.91 | 196.54 |
-TABLE-4: Energy (Joules) of activity recognition learner inference on embedded devices.
+TABLE-4: Energy (Joules) of activity recognition learner inference on embedded devices.
| Model | TX2 | Xavier |
| ------- | ------ | ------ |
| X3D-L | 187.89 | 23.54 |
@@ -468,5 +465,6 @@ Model inference works as expected.
#### References
-[1] X3D: Expanding Architectures for Efficient Video Recognition,
+[2] X3D: Expanding Architectures for Efficient Video Recognition,
[arXiv](https://arxiv.org/abs/2004.04730).
+
diff --git a/docs/reference/ambiguity_measure.md b/docs/reference/ambiguity_measure.md
new file mode 100644
index 0000000000..3b816628e6
--- /dev/null
+++ b/docs/reference/ambiguity_measure.md
@@ -0,0 +1,76 @@
+## ambiguity_measure module
+
+The *ambiguity_measure* module contains the *AmbiguityMeasure* class.
+
+### Class AmbiguityMeasure
+Bases: `object`
+
+The *AmbiguityMeasure* class is a tool for obtaining an ambiguity measure for vision-based models that output pixel-wise value estimates.
+This tool can be used in combination with vision-based manipulation models such as Transporter Nets [[1]](#transporter-paper).
+
+The [AmbiguityMeasure](../../src/opendr/utils/ambiguity_measure/ambiguity_measure.py) class has the following public methods:
+
+#### `AmbiguityMeasure` constructor
+```python
+AmbiguityMeasure(self, threshold, temperature)
+```
+
+Constructor parameters:
+
+- **threshold**: *float, default=0.5*\
+ Ambiguity threshold, should be in [0, 1).
+- **temperature**: *float, default=1.0*\
+ Temperature of the sigmoid function.
+ Should be > 0.
+ Higher temperatures will result in higher ambiguity measures.
+
+#### `AmbiguityMeasure.get_ambiguity_measure`
+```python
+AmbiguityMeasure.get_ambiguity_measure(self, heatmap)
+```
+
+This method computes an ambiguity measure from the output of a model.
+
+Parameters:
+
+- **heatmap**: *np.ndarray*\
+ Pixel-wise value estimates.
+ These can be obtained from, for example, a Transporter Nets model [[1]](#transporter-paper).
+
+#### Demos and tutorial
+
+A demo showcasing the usage and functionality of the *AmbiguityMeasure* is available [here](https://colab.research.google.com/github/opendr-eu/opendr/blob/ambiguity_measure/projects/python/utils/ambiguity_measure/ambiguity_measure_tutorial.ipynb).
+
+
+#### Examples
+
+* **Ambiguity measure example**
+
+ This example shows how to obtain the ambiguity measure from pixel-wise value estimates.
+
+ ```python
+ import numpy as np
+ from opendr.utils.ambiguity_measure.ambiguity_measure import AmbiguityMeasure
+
+ # Simulate an image and pixel-wise value estimates (normally you would get these from a model such as Transporter)
+ img = 255 * np.random.random((128, 128, 3))
+ img = np.asarray(img, dtype="uint8")
+ heatmap = 10 * np.random.random((128, 128))
+
+ # Initialize ambiguity measure
+ am = AmbiguityMeasure(threshold=0.1, temperature=3)
+
+ # Get ambiguity measure of the heatmap
+ ambiguous, locs, maxima, probs = am.get_ambiguity_measure(heatmap)
+
+ # Plot ambiguity measure
+ am.plot_ambiguity_measure(heatmap, locs, probs, img)
+ ```
+
+#### References
+[1]
+Zeng, A., Florence, P., Tompson, J., Welker, S., Chien, J., Attarian, M., ... & Lee, J. (2021, October).
+Transporter networks: Rearranging the visual world for robotic manipulation.
+In Conference on Robot Learning (pp. 726-747).
+PMLR.
+
diff --git a/docs/reference/continual-transformer-encoder.md b/docs/reference/continual-transformer-encoder.md
new file mode 100644
index 0000000000..38ec084bae
--- /dev/null
+++ b/docs/reference/continual-transformer-encoder.md
@@ -0,0 +1,211 @@
+## Continual Transformer Encoder module
+
+
+### Class CoTransEncLearner
+Bases: `engine.learners.Learner`
+
+The *CoTransEncLearner* class provides a Continual Transformer Encoder learner, which can be used for time-series processing of user-provided features.
+This module was originally proposed by Hedegaard et al. in "Continual Transformers: Redundancy-Free Attention for Online Inference", 2022, https://arxiv.org/abs/2201.06268.
+
+The [CoTransEncLearner](src/opendr/perception/activity_recognition/continual_transformer_decoder/continual_transformer_decoder_learner.py) class has the following public methods:
+
+#### `CoTransEncLearner` constructor
+
+```python
+CoTransEncLearner(self, lr, iters, batch_size, optimizer, lr_schedule, network_head, num_layers, input_dims, hidden_dims, sequence_len, num_heads, dropout, num_classes, positional_encoding_learned, checkpoint_after_iter, checkpoint_load_iter, temp_path, device, loss, weight_decay, momentum, drop_last, pin_memory, num_workers, seed)
+```
+
+Constructor parameters:
+
+ - **lr**: *float, default=1e-2*\
+ Learning rate during optimization.
+ - **iters**: *int, default=10*\
+ Number of epochs to train for.
+ - **batch_size**: *int, default=64*\
+ Dataloader batch size. Defaults to 64.
+ - **optimizer**: *str, default="sgd"*\
+ Name of optimizer to use ("sgd" or "adam").
+ - **lr_schedule**: *str, default=""*\
+ Schedule for training the model.
+ - **network_head**: *str, default="classification"*\
+ Head of network (only "classification" is currently available).
+ - **num_layers**: *int, default=1*\
+ Number of Transformer Encoder layers (1 or 2). Defaults to 1.
+ - **input_dims**: *float, default=1024*\
+ Input dimensions per token.
+ - **hidden_dims**: *float, default=1024*\
+ Hidden projection dimension.
+ - **sequence_len**: *int, default=64*\
+ Length of token sequence to consider.
+ - **num_heads**: *int, default=8*\
+ Number of attention heads.
+ - **dropout**: *float, default=0.1*\
+ Dropout probability.
+ - **num_classes**: *int, default=22*\
+ Number of classes to predict among.
+ - **positional_encoding_learned**: *bool, default=False*\
+ Positional encoding type.
+ - **checkpoint_after_iter**: *int, default=0*\
+ Unused parameter.
+ - **checkpoint_load_iter**: *int, default=0*\
+ Unused parameter.
+ - **temp_path**: *str, default=""*\
+ Path in which to store temporary files.
+ - **device**: *str, default="cuda"*\
+ Name of computational device ("cpu" or "cuda").
+ - **loss**: *str, default="cross_entropy"*\
+ Loss function used during optimization.
+ - **weight_decay**: *float, default=1e-4*\
+ Weight decay used for optimization.
+ - **momentum**: *float, default=0.9*\
+ Momentum used for optimization.
+ - **drop_last**: *bool, default=True*\
+ Drop last data point if a batch cannot be filled.
+ - **pin_memory**: *bool, default=False*\
+ Pin memory in dataloader.
+ - **num_workers**: *int, default=0*\
+ Number of workers in dataloader.
+ - **seed**: *int, default=123*\
+ Random seed.
+
+
+#### `CoTransEncLearner.fit`
+```python
+CoTransEncLearner.fit(self, dataset, val_dataset, epochs, steps)
+```
+
+This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
+Parameters:
+ - **dataset**: *Dataset*
+ Training dataset.
+ - **val_dataset**: *Dataset, default=None*
+ Validation dataset. If none is given, validation steps are skipped.
+ - **epochs**: *int, default=None*
+ Number of epochs. If none is supplied, self.iters will be used.
+ - **steps**: *int, default=None*
+ Number of training steps to conduct. If none, this is determined by epochs.
+
+
+#### `CoTransEncLearner.eval`
+```python
+CoTransEncLearner.eval(self, dataset, steps)
+```
+This method is used to evaluate a trained model on an evaluation dataset.
+Returns a dictionary containing stats regarding evaluation.
+
+Parameters:
+ - **dataset**: *Dataset*
+ Dataset on which to evaluate model.
+ - **steps**: *int, default=None*
+ Number of validation batches to evaluate. If None, all batches are evaluated.
+
+
+#### `CoTransEncLearner.infer`
+```python
+CoTransEncLearner.infer(x)
+```
+
+This method is used to perform classification of a time-series.
+Returns an `engine.target.Category` object, which holds the predicted category.
+
+Parameters:
+- **x**: *Union[Timeseries, Vector, torch.Tensor]*
+ Either a single time instance (Vector) or a Timeseries. x can also be passed as a torch.Tensor.
+
+
+#### `CoTransEncLearner.save`
+```python
+CoTransEncLearner.save(self, path)
+```
+
+Save model weights and metadata to path.
+Provided with the path "/my/path/name" (absolute or relative), it creates the "name" directory, if it does not already exist.
+Inside this folder, the model is saved as "model_name.pth" and the metadata file as "name.json".
+If the files already exist, their names are versioned with a suffix.
+
+If `self.optimize` was run previously, it saves the optimized ONNX model in a similar fashion with an ".onnx" extension.
+
+Parameters:
+- **path**: *str*
+ Directory in which to save model weights and meta data.
+
+
+#### `CoTransEncLearner.load`
+```python
+CoTransEncLearner.load(self, path)
+```
+
+This method is used to load a previously saved model from its saved folder.
+
+Parameters:
+- **path**: *str*
+ Path to metadata file in json format or to weights path.
+
+
+#### `CoTransEncLearner.optimize`
+```python
+CoTransEncLearner.optimize(self, do_constant_folding)
+```
+
+Optimize model execution. This is accomplished by saving to the ONNX format and loading the optimized model.
+
+Parameters:
+- **do_constant_folding**: *bool, default=False*
+ ONNX format optimization.
+ If True, the constant-folding optimization is applied to the model during export.
+ Constant-folding optimization will replace some of the ops that have all constant inputs, with pre-computed constant nodes.
+
+
+#### Examples
+
+* **Fit model**.
+
+ ```python
+ from opendr.perception.activity_recognition import CoTransEncLearner
+ from opendr.perception.activity_recognition.datasets import DummyTimeseriesDataset
+
+ learner = CoTransEncLearner(
+ batch_size=2,
+ device="cpu",
+ input_dims=8,
+ hidden_dims=32,
+ sequence_len=64,
+ num_heads=8,
+ num_classes=4,
+ )
+ train_ds = DummyTimeseriesDataset(
+ sequence_len=64, num_sines=8, num_datapoints=128
+ )
+ val_ds = DummyTimeseriesDataset(
+ sequence_len=64, num_sines=8, num_datapoints=128, base_offset=128
+ )
+ learner.fit(dataset=train_ds, val_dataset=val_ds, steps=2)
+ learner.save('./saved_models/trained_model')
+ ```
+
+* **Evaluate model**.
+
+ ```python
+ from opendr.perception.activity_recognition import CoTransEncLearner
+ from opendr.perception.activity_recognition.datasets import DummyTimeseriesDataset
+
+ learner = CoTransEncLearner(
+ batch_size=2,
+ device="cpu",
+ input_dims=8,
+ hidden_dims=32,
+ sequence_len=64,
+ num_heads=8,
+ num_classes=4,
+ )
+ test_ds = DummyTimeseriesDataset(
+ sequence_len=64, num_sines=8, num_datapoints=128, base_offset=256
+ )
+ results = learner.eval(test_ds) # Dict with accuracy and loss
+ ```
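+
+* **Run inference (illustrative sketch)**.
+
+  This is a minimal sketch, assuming a learner configured as in the examples above; a single time step is passed as a plain `torch.Tensor` with `input_dims` features, as accepted by `CoTransEncLearner.infer`.
+
+  ```python
+  import torch
+  from opendr.perception.activity_recognition import CoTransEncLearner
+
+  learner = CoTransEncLearner(
+      batch_size=2,
+      device="cpu",
+      input_dims=8,
+      hidden_dims=32,
+      sequence_len=64,
+      num_heads=8,
+      num_classes=4,
+  )
+  x = torch.randn(8)  # one time step with input_dims=8 random features, for illustration only
+  category = learner.infer(x)  # returns an engine.target.Category
+  ```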
+
+
+#### References
+[3] Continual Transformers: Redundancy-Free Attention for Online Inference,
+[arXiv](https://arxiv.org/abs/2201.06268).
diff --git a/docs/reference/customize.md b/docs/reference/customize.md
new file mode 100644
index 0000000000..3af80f0656
--- /dev/null
+++ b/docs/reference/customize.md
@@ -0,0 +1,131 @@
+# Customizing the toolkit
+
+OpenDR is fully open-source and can be readily customized to meet the needs of several different application areas, since the source code for all the developed tools is provided.
+Several ready-to-use examples, which are expected to cover a wide range of different needs, are also included.
+For example, users can readily use the existing [ROS nodes](../../projects/opendr_ws), e.g., by including the required triggers or by combining several nodes into one to build custom nodes that will fit their needs.
+Furthermore, note that several tools can be combined within a ROS node, as showcased in [face recognition ROS node](../../projects/opendr_ws/src/perception/scripts/face_recognition.py).
+You can use these nodes as a template for customizing the toolkit to your own needs.
+The rest of this document includes instructions for:
+1. [Building docker images using the provided docker files](#building-custom-docker-images)
+2. [Customizing existing docker images](#customizing-existing-docker-images)
+3. [Changing the behavior of ROS nodes](#changing-the-behavior-of-ros-nodes)
+4. [Building docker images that do not contain the whole toolkit](#building-docker-images-that-do-not-contain-the-whole-toolkit)
+
+
+## Building custom docker images
+The default docker images can be too large for some applications.
+OpenDR provides the dockerfiles for customizing the images to your own needs, e.g., using OpenDR in custom third-party images.
+Therefore, you can build the docker images locally using the [Dockerfile](/Dockerfile) ([Dockerfile-cuda](/Dockerfile-cuda) for cuda) provided in the root folder of the toolkit.
+
+### Building the CPU image
+For the CPU image, execute the following commands:
+```bash
+git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
+cd opendr
+sudo docker build -t opendr/opendr-toolkit:cpu .
+```
+
+### Building the CUDA image
+For the cuda-enabled image, first edit `/etc/docker/daemon.json` in order to set the default docker runtime:
+```
+{
+ "runtimes": {
+ "nvidia": {
+ "path": "nvidia-container-runtime",
+ "runtimeArgs": []
+ }
+ },
+ "default-runtime": "nvidia"
+}
+```
+
+Restart docker afterwards:
+```
+sudo systemctl restart docker.service
+```
+Then you can build the supplied dockerfile:
+```bash
+git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
+cd opendr
+sudo docker build -t opendr/opendr-toolkit:cuda -f Dockerfile-cuda .
+```
+
+### Building the Embedded Devices image
+The provided Dockerfile-embedded is tested on freshly flashed NVIDIA NX, TX2 and AGX devices using JetPack 4.6.
+
+To build the embedded devices images yourself, first edit `/etc/docker/daemon.json` in order to set the default docker runtime:
+```
+{
+ "runtimes": {
+ "nvidia": {
+ "path": "nvidia-container-runtime",
+ "runtimeArgs": []
+ }
+ },
+ "default-runtime": "nvidia"
+}
+```
+
+Restart docker afterwards:
+```
+sudo systemctl restart docker.service
+```
+
+Then run:
+```
+sudo docker build --build-arg device=nx -t opendr/opendr-toolkit:nx -f Dockerfile-embedded .
+```
+You can build the image for nx/tx2/agx by changing the build-arg accordingly.
+
+### Running the custom images
+In order to run them, the commands are respectively:
+```bash
+sudo docker run -p 8888:8888 opendr/opendr-toolkit:cpu
+```
+or:
+```
+sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
+```
+or:
+```
+sudo docker run -p 8888:8888 opendr/opendr-toolkit:nx
+```
+## Customizing existing docker images
+Building docker images from scratch can take a lot of time, especially for embedded systems without cross-compilation support.
+If you need to modify a docker image without rebuilding it (e.g., for changing some source files inside it or adding support for custom pipelines), then you can simply start from the image that you are interested in, make the changes and use the [docker commit](https://docs.docker.com/engine/reference/commandline/commit/) command.
+In this way, the changes that have been made will be saved in a new image.
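+
+A minimal illustrative sequence of commands (the image and container names below are placeholders) could look like this:
+
+```bash
+# start a container from the image you want to customize
+sudo docker run -it --name opendr-custom opendr/opendr-toolkit:cpu /bin/bash
+# ...make your changes inside the container and exit, then save them as a new image
+sudo docker commit opendr-custom opendr-toolkit:customized
+```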
+
+
+## Changing the behavior of ROS nodes
+ROS nodes are provided as examples that demonstrate how various tools can be used.
+As a result, customization might be needed in order to make them appropriate for your specific needs.
+Currently, all nodes support changing the input/output topics (please refer to the [README](../../projects/opendr_ws/src/opendr_perception/README.md) for more information on each node).
+However, if you need to change anything else (e.g., load a custom model), then you should appropriately modify the source code of the nodes.
+This is very easy, since the Python API of OpenDR is used in all of the provided nodes.
+You can refer to [Python API documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/index.md) for more details for the tool that you are interested in.
+
+### Loading a custom model
+Loading a custom model in a ROS node is very easy.
+First, locate the node that you want to modify (e.g., [pose estimation](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py)).
+Then, search for the line where the learner loads the model (i.e., calls the `load()` function).
+For the aforementioned node, this happens at [line 76](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L76).
+Then, replace the path to the `load()` function with the path to your custom model.
+You can also optionally remove the call to `download()` function (e.g., [line 75](../../projects/opendr_ws/src/perception/scripts/pose_estimation.py#L75)) to make the node start up faster.
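+
+As a minimal sketch (the variable name and the original arguments below are illustrative; check the actual node source for the exact lines), the change could look like this:
+
+```python
+# before: download and load the default pretrained model
+pose_estimator.download(path=".", verbose=True)
+pose_estimator.load("openpose_default")
+
+# after: skip the download and load your own model instead
+pose_estimator.load("/path/to/my_custom_model")
+```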
+
+
+## Building docker images that do not contain the whole toolkit
+To build custom docker images that do not contain the whole toolkit you should follow these steps:
+1. Identify the tools that you are using and note them.
+2. Start from a clean clone of the repository and remove all modules under `src/opendr` that you are not using.
+To this end, use the `rm` command from the root folder of the toolkit and write down the commands that you are issuing.
+Please note that you should NOT remove the `engine` package.
+3. Add the `rm` commands that you have issued in the dockerfile (e.g., in the main [dockerfile](https://github.com/opendr-eu/opendr/blob/master/Dockerfile)) after the `WORKDIR` command and before the `RUN ./bin/install.sh` command (see the illustrative sketch below).
+4. Build the dockerfile as usual.
+
+By removing the tools that you are not using, you are also removing the corresponding `requirements.txt` file.
+In this way, the `install.sh` script will not pull and install the corresponding dependencies, resulting in smaller and more lightweight docker images.
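+
+As an illustrative sketch (the module names below are placeholders and the `WORKDIR` path is whatever the original Dockerfile uses; never remove `src/opendr/engine`), the relevant part of the Dockerfile could look like this:
+
+```
+WORKDIR /opendr
+# Remove the modules that are not needed for your application
+RUN rm -rf src/opendr/perception/activity_recognition \
+           src/opendr/perception/speech_recognition
+RUN ./bin/install.sh
+```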
+
+Things to keep in mind:
+1. ROS noetic is manually installed by the installation script.
+If you want to install another version, you should modify both `install.sh` and `Makefile`.
+2. `mxnet`, `torch` and `detectron` are manually installed by the `install.sh` script if you have set `OPENDR_DEVICE=gpu`.
+If you do not need these dependencies, then you should manually remove them.
diff --git a/docs/reference/detr.md b/docs/reference/detr.md
index d54f267ac0..b2007cb601 100644
--- a/docs/reference/detr.md
+++ b/docs/reference/detr.md
@@ -230,10 +230,10 @@ Documentation on how to use this node can be found [here](../../projects/opendr_
#### Tutorials and Demos
A tutorial on performing inference is available
-[here](../../projects/perception/object_detection_2d/detr/inference_tutorial.ipynb).
-Furthermore, demos on performing [training](../../projects/perception/object_detection_2d/detr/train_demo.py),
-[evaluation](../../projects/perception/object_detection_2d/detr/eval_demo.py) and
-[inference](../../projects/perception/object_detection_2d/detr/inference_demo.py) are also available.
+[here](../../projects/python/perception/object_detection_2d/detr/inference_tutorial.ipynb).
+Furthermore, demos on performing [training](../../projects/python/perception/object_detection_2d/detr/train_demo.py),
+[evaluation](../../projects/python/perception/object_detection_2d/detr/eval_demo.py) and
+[inference](../../projects/python/perception/object_detection_2d/detr/inference_demo.py) are also available.
diff --git a/docs/reference/eagerx.md b/docs/reference/eagerx.md
index 53f3eae930..537e128c3d 100644
--- a/docs/reference/eagerx.md
+++ b/docs/reference/eagerx.md
@@ -24,21 +24,21 @@ Documentation is available online: [https://eagerx.readthedocs.io](https://eager
**Prerequisites**: EAGERx requires ROS Noetic and Python 3.8 to be installed.
-1. **[demo_full_state](../../projects/control/eagerx/demos/demo_full_state.py)**:
+1. **[demo_full_state](../../projects/python/control/eagerx/demos/demo_full_state.py)**:
Here, we wrap the OpenAI gym within EAGERx.
The agent learns to map low-dimensional angular observations to torques.
-2. **[demo_pid](../../projects/control/eagerx/demos/demo_pid.py)**:
+2. **[demo_pid](../../projects/python/control/eagerx/demos/demo_pid.py)**:
Here, we add a PID controller, tuned to stabilize the pendulum in the upright position, as a pre-processing node.
The agent now maps low-dimensional angular observations to reference torques.
In turn, the reference torques are converted to torques by the PID controller, and applied to the system.
-3. **[demo_classifier](../../projects/control/eagerx/demos/demo_classifier.py)**:
+3. **[demo_classifier](../../projects/python/control/eagerx/demos/demo_classifier.py)**:
Instead of using low-dimensional angular observations, the environment now produces pixel images of the pendulum.
In order to speed-up learning, we use a pre-trained classifier to convert these pixel images to estimated angular observations.
Then, the agent uses these estimated angular observations similarly as in 'demo_2_pid' to successfully swing-up the pendulum.
Example usage:
```bash
-cd $OPENDR_HOME/projects/control/eagerx/demos
+cd $OPENDR_HOME/projects/python/control/eagerx/demos
python3 [demo_name]
```
diff --git a/docs/reference/end-to-end-planning.md b/docs/reference/end-to-end-planning.md
index 79ff34f7a0..748ecdde91 100644
--- a/docs/reference/end-to-end-planning.md
+++ b/docs/reference/end-to-end-planning.md
@@ -7,22 +7,22 @@ class *LearnerRL*.
Bases: `engine.learners.LearnerRL`
The *EndToEndPlanningRLLearner* is an agent that can be used to train quadrotor robots equipped with a depth sensor to
-follow a provided trajectory while avoiding obstacles.
+follow a provided trajectory while avoiding obstacles. Originally published in [[1]](#safe-e2e-planning).
-The [EndToEndPlanningRLLearner](/src/opendr/planning/end_to_end_planning/e2e_planning_learner.py) class has the
+The [EndToEndPlanningRLLearner](../../src/opendr/planning/end_to_end_planning/e2e_planning_learner.py) class has the
following public methods:
#### `EndToEndPlanningRLLearner` constructor
Constructor parameters:
-- **env**: *gym.Env*\
- Reinforcment learning environment to train or evaluate the agent on.
+- **env**: *gym.Env, default=None*\
+ Reinforcement learning environment to train or evaluate the agent on.
- **lr**: *float, default=3e-4*\
Specifies the initial learning rate to be used during training.
- **n_steps**: *int, default=1024*\
Specifies the number of steps to run for environment per update.
-- **iters**: *int, default=5e4*\
+- **iters**: *int, default=1e5*\
Specifies the number of steps the training should run for.
- **batch_size**: *int, default=64*\
Specifies the batch size during training.
@@ -35,7 +35,7 @@ Constructor parameters:
#### `EndToEndPlanningRLLearner.fit`
```python
-EndToEndPlanningRLLearner.fit(self, env, logging_path, silent, verbose)
+EndToEndPlanningRLLearner.fit(self, env, logging_path, verbose)
```
Train the agent on the environment.
@@ -46,8 +46,6 @@ Parameters:
If specified use this env to train.
- **logging_path**: *str, default=''*\
Path for logging and checkpointing.
-- **silent**: *bool, default=False*\
- Disable verbosity.
- **verbose**: *bool, default=True*\
Enable verbosity.
@@ -103,17 +101,20 @@ Parameters:
### Simulation environment setup
-The environment includes an Ardupilot controlled quadrotor in Webots simulation.
+The environment is provided with a [world](../../src/opendr/planning/end_to_end_planning/envs/webots/worlds/train-no-dynamic-random-obstacles.wbt)
+that needs to be opened with Webots version 2022b in order to demonstrate the end-to-end planner.
+
+The environment includes an optional Ardupilot controlled quadrotor for simulating dynamics.
For the installation of Ardupilot instructions are available [here](https://github.com/ArduPilot/ardupilot).
-The required files to complete Ardupilot setup can be downloaded by running [`download_ardupilot_files.py`](src/opendr/planning/end_to_end_planning/download_ardupilot_files.py) script.
+The required files to complete Ardupilot setup can be downloaded by running [download_ardupilot_files.py](../../src/opendr/planning/end_to_end_planning/download_ardupilot_files.py) script.
The downloaded files (zipped as `ardupilot.zip`) should be replaced under the installation of Ardupilot.
In order to run Ardupilot in Webots 2021a, controller codes should be replaced. (For older versions of Webots, these files can be skipped.)
The world file for the environment is provided under `/ardupilot/libraries/SITL/examples/webots/worlds/` for training and testing.
Install `mavros` package for ROS communication with Ardupilot.
Instructions are available [here](https://github.com/mavlink/mavros/blob/master/mavros/README.md#installation).
-Source installation is recomended.
+Source installation is recommended.
### Running the environment
@@ -128,16 +129,16 @@ The simulation time should stop at first time step and wait for Ardupilot softwa
- `take_off` which takes off the quadrotor.
- `range_image` which converts the depth image into array format to be input for the learner.
-After these steps the [AgiEnv](src/opendr/planning/end_to_end_planning/envs/agi_env.py) gym environment can send action comments to the simulated drone and receive depth image and pose information from simulation.
+After these steps the [UAVDepthPlanningEnv](../../src/opendr/planning/end_to_end_planning/envs/UAV_depth_planning_env.py) gym environment can send action commands to the simulated drone and receive depth images and pose information from the simulation.
### Examples
Training in Webots environment:
```python
-from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner, AgiEnv
+from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner, UAVDepthPlanningEnv
-env = AgiEnv()
+env = UAVDepthPlanningEnv()
learner = EndToEndPlanningRLLearner(env, n_steps=1024)
learner.fit(logging_path='./end_to_end_planning_tmp')
```
@@ -146,9 +147,9 @@ learner.fit(logging_path='./end_to_end_planning_tmp')
Running a pretrained model:
```python
-from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner, AgiEnv
+from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner, UAVDepthPlanningEnv
-env = AgiEnv()
+env = UAVDepthPlanningEnv()
learner = EndToEndPlanningRLLearner(env)
learner.load('{$OPENDR_HOME}/src/opendr/planning/end_to_end_planning/pretrained_model/saved_model.zip')
obs = env.reset()
@@ -182,4 +183,8 @@ TABLE 2: Platform compatibility evaluation.
| x86 - Ubuntu 20.04 (CPU docker) | Pass |
| x86 - Ubuntu 20.04 (GPU docker) | Pass |
| NVIDIA Jetson TX2 | Pass |
-| NVIDIA Jetson Xavier AGX | Pass |
\ No newline at end of file
+| NVIDIA Jetson Xavier AGX | Pass |
+
+#### References
+[1] Ugurlu, H.I.; Pham, X.H.; Kayacan, E. Sim-to-Real Deep Reinforcement Learning for Safe End-to-End Planning of Aerial Robots. Robotics 2022, 11, 109.
+[DOI](https://doi.org/10.3390/robotics11050109). [GitHub](https://github.com/open-airlab/gym-depth-planning.git)
\ No newline at end of file
diff --git a/docs/reference/face-detection-2d-retinaface.md b/docs/reference/face-detection-2d-retinaface.md
index 976c60e26d..da160df163 100644
--- a/docs/reference/face-detection-2d-retinaface.md
+++ b/docs/reference/face-detection-2d-retinaface.md
@@ -167,17 +167,17 @@ Parameters:
If True, maximum verbosity if enabled.
- **url**: *str, default=OpenDR FTP URL*\
URL of the FTP server.
-
+
#### Examples
* **Training example**.
- To train properly, the backbone weights are downloaded automatically in the `temp_path`.
+ To train properly, the backbone weights are downloaded automatically in the `temp_path`.
The WIDER Face detection dataset is supported for training, implemented as a `DetectionDataset` subclass. This example assumes the data has been downloaded and placed in the directory referenced by `data_root`.
```python
from opendr.perception.object_detection_2d import RetinaFaceLearner, WiderFaceDataset
from opendr.engine.datasets import ExternalDataset
-
+
dataset = WiderFaceDataset(root=data_root, splits=['train'])
face_learner = RetinaFaceLearner(backbone='resnet', prefix='retinaface_resnet50',
@@ -189,7 +189,7 @@ Parameters:
face_learner.fit(dataset, val_dataset=dataset, verbose=True)
face_learner.save('./trained_models/retinaface_resnet50')
```
-
+
Custom datasets are supported by inheriting the `DetectionDataset` class.
* **Inference and result drawing example on a test .jpg image using OpenCV.**
@@ -208,7 +208,7 @@ Parameters:
img = draw_bounding_boxes(img.opencv(), bounding_boxes, learner.classes, show=True)
```
-
+
#### Performance Evaluation
In terms of speed, the performance of RetinaFace is summarized in the table below (in FPS).
@@ -223,12 +223,12 @@ The measurement was made on a Jetson TX2 module.
| Variant | Memory (MB) | Energy (Joules) - Total per inference |
|-------------------|---------|-------|
-| RetinaFace | 4443 | 21.83 |
+| RetinaFace | 4443 | 21.83 |
| RetinaFace-MobileNet | 4262 | 8.73 |
Finally, we measure the recall on the WIDER face validation subset at 87.83%.
Note that RetinaFace can make use of image pyramids and horizontal flipping to achieve even better recall at the cost of additional computations.
-For the MobileNet version, recall drops to 77.81%.
+For the MobileNet version, recall drops to 77.81%.
The platform compatibility evaluation is also reported below:
@@ -242,8 +242,8 @@ The platform compatibility evaluation is also reported below:
| NVIDIA Jetson TX2 | :heavy_check_mark: |
| NVIDIA Jetson Xavier AGX | :heavy_check_mark: |
| NVIDIA Jetson Xavier NX | :heavy_check_mark: |
-
+
#### References
[1] RetinaFace: Single-stage Dense Face Localisation in the Wild,
[arXiv](https://arxiv.org/abs/1905.00641).
-
+
diff --git a/docs/reference/fall-detection.md b/docs/reference/fall-detection.md
index 3d535a633c..567ff89993 100644
--- a/docs/reference/fall-detection.md
+++ b/docs/reference/fall-detection.md
@@ -5,9 +5,18 @@ The *fall_detection* module contains the *FallDetectorLearner* class, which inhe
### Class FallDetectorLearner
Bases: `engine.learners.Learner`
-The *FallDetectorLearner* class contains the implementation of a naive fall detector algorithm.
+The *FallDetectorLearner* class contains the implementation of a rule-based fall detector algorithm.
It can be used to perform fall detection on images (inference) using a pose estimator.
+This rule-based method can provide **cheap and fast** fall detection capabilities when pose estimation
+is already being used. Its inference time cost is ~0.1% of pose estimation, adding negligible overhead.
+
+However, it **has known limitations** due to its nature. Working with 2D poses means that depending on the
+orientation of the person, it cannot detect most fallen poses that face the camera.
+A known source of false positives is a sitting person whose knees are detectable but whose ankles are
+obscured or undetectable; this case is nevertheless kept, as the same rule is critical for detecting a
+fallen person whose ankles are not visible.
+
The [FallDetectorLearner](/src/opendr/perception/fall_detection/fall_detector_learner.py) class has the
following public methods:
diff --git a/docs/reference/fmp_gmapping.md b/docs/reference/fmp_gmapping.md
index 6df53abfa1..913bd88609 100644
--- a/docs/reference/fmp_gmapping.md
+++ b/docs/reference/fmp_gmapping.md
@@ -3,9 +3,9 @@
Traditional *SLAM* algorithm for estimating a robot's position and a 2D, grid-based map of the environment from planar LiDAR scans.
Based on OpenSLAM GMapping, with additional functionality for computing the closed-form Full Map Posterior Distribution.
-For more details on the launchers and tools, see the [FMP_Eval Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/README.md).
+For more details on the launchers and tools, see the [FMP_Eval Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/README.md).
-For more details on the actual SLAM algorithm and its ROS node wrapper, see the [SLAM_GMapping Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/slam_gmapping/README.md).
+For more details on the actual SLAM algorithm and its ROS node wrapper, see the [SLAM_GMapping Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/slam_gmapping/README.md).
## Demo Usage
A demo ROSBag for a square corridor can be found in the Map Simulator submodule in `src/map_simulator/rosbags/`, as well as preconfigured ***roslaunch***
@@ -25,4 +25,4 @@ This will start the following processes and nodes:
Other ROSBags can be easily generated with the map simulator script from either new custom scenarios, or from the test configuration files in `src/map_simulator/scenarios/robots/` directory.
-For more information on how to define custom test scenarios and converting them to ROSBags, see the [Map_Simulator Readme](../../projects/perception/slam/full_map_posterior_gmapping/src/map_simulator/README.md).
\ No newline at end of file
+For more information on how to define custom test scenarios and converting them to ROSBags, see the [Map_Simulator Readme](../../projects/python/perception/slam/full_map_posterior_gmapping/src/map_simulator/README.md).
\ No newline at end of file
diff --git a/docs/reference/gem.md b/docs/reference/gem.md
index 27e19ae9b7..88826b60f1 100644
--- a/docs/reference/gem.md
+++ b/docs/reference/gem.md
@@ -216,8 +216,8 @@ Parameters:
#### Demo and Tutorial
-An inference [demo](../../projects/perception/object_detection_2d/gem/inference_demo.py) and
-[tutorial](../../projects/perception/object_detection_2d/gem/inference_tutorial.ipynb) are available.
+An inference [demo](../../projects/python/perception/object_detection_2d/gem/inference_demo.py) and
+[tutorial](../../projects/python/perception/object_detection_2d/gem/inference_tutorial.ipynb) are available.
#### Examples
diff --git a/docs/reference/high-resolution-pose-estimation.md b/docs/reference/high-resolution-pose-estimation.md
new file mode 100644
index 0000000000..b0128397ee
--- /dev/null
+++ b/docs/reference/high-resolution-pose-estimation.md
@@ -0,0 +1,358 @@
+## high_resolution_pose_estimation module
+
+The *high_resolution_pose_estimation* module contains the *HighResolutionPoseEstimationLearner* class, which inherits from the abstract class *Learner*.
+
+### Class HighResolutionPoseEstimationLearner
+Bases: `engine.learners.Learner`
+
+The *HighResolutionPoseEstimationLearner* class is an implementation for pose estimation in high-resolution images.
+This method creates a heatmap of a resized version of the input image.
+Using this heatmap, the input image is cropped to keep the area of interest, and the cropped image is then used for pose estimation.
+Since the high resolution pose estimation method is based on the Lightweight OpenPose algorithm, the models that can be used have to be trained with the Lightweight OpenPose tool.
+
+In this method there are two important parameters which are responsible for the increase in speed and accuracy on high-resolution images.
+These parameters are *first_pass_height* and *second_pass_height*, which define how the image is resized in this procedure.
+
+The [HighResolutionPoseEstimationLearner](/src/opendr/perception/pose_estimation/hr_pose_estimation/high_resolution_learner.py) class has the following public methods:
+
+#### `HighResolutionPoseEstimationLearner` constructor
+```python
+HighResolutionPoseEstimationLearner(self, device, backbone, temp_path, mobilenet_use_stride, mobilenetv2_width, shufflenet_groups, num_refinement_stages, batches_per_iter, base_height, first_pass_height, second_pass_height, percentage_arround_crop, heatmap_threshold, experiment_name, num_workers, weights_only, output_name, multiscale, scales, visualize, img_mean, img_scale, pad_value, half_precision)
+```
+
+Constructor parameters:
+
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
+ Specifies the device to be used.
+- **backbone**: *{'mobilenet', 'mobilenetv2', 'shufflenet'}, default='mobilenet'*\
+ Specifies the backbone architecture.
+- **temp_path**: *str, default='temp'*\
+  Specifies a path where the algorithm looks for pretrained backbone weights; the checkpoints are saved there along with the logging files.
+  Moreover, the JSON file that contains the evaluation detections is saved here.
+- **mobilenet_use_stride**: *bool, default=True*\
+ Whether to add a stride value in the mobilenet model, which reduces accuracy but increases inference speed.
+- **mobilenetv2_width**: *[0.0 - 1.0], default=1.0*\
+ If the mobilenetv2 backbone is used, this parameter specifies its size.
+- **shufflenet_groups**: *int, default=3*\
+ If the shufflenet backbone is used, it specifies the number of groups to be used in grouped 1x1 convolutions in each ShuffleUnit.
+- **num_refinement_stages**: *int, default=2*\
+  Specifies the number of pose estimation refinement stages that are added to the model's head, including the initial stage.
+- **batches_per_iter**: *int, default=1*\
+ Specifies per how many batches a backward optimizer step is performed.
+- **base_height**: *int, default=256*\
+  Specifies the height based on which the images will be resized before performing the forward pass when using Lightweight OpenPose.
+- **first_pass_height**: *int, default=360*\
+  Specifies the height to which the input image is resized during the heatmap generation procedure.
+- **second_pass_height**: *int, default=540*\
+  Specifies the height of the image used in the second inference pass of the pose estimation procedure.
+- **percentage_arround_crop**: *float, default=0.3*\
+  Specifies the percentage of extra padding around the cropped image.
+- **heatmap_threshold**: *float, default=0.1*\
+  Specifies the threshold value that the heatmap elements should have during the first pass in order to trigger the second pass.
+- **experiment_name**: *str, default='default'*\
+ String name to attach to checkpoints.
+- **num_workers**: *int, default=8*\
+ Specifies the number of workers to be used by the data loader.
+- **weights_only**: *bool, default=True*\
+ If True, only the model weights will be loaded; it won't load optimizer, scheduler, num_iter, current_epoch information.
+- **output_name**: *str, default='detections.json'*\
+ The name of the json file where the evaluation detections are stored, inside the temp_path.
+- **multiscale**: *bool, default=False*\
+ Specifies whether evaluation will run in the predefined multiple scales setup or not.
+ It overwrites self.scales to [0.5, 1.0, 1.5, 2.0].
+- **scales**: *list, default=None*\
+ A list of integer scales that define the multiscale evaluation setup.
+ Used to manually set the scales instead of going for the predefined multiscale setup.
+- **visualize**: *bool, default=False*\
+ Specifies whether the images along with the poses will be shown, one by one, during evaluation.
+- **img_mean**: *list, default=(128, 128, 128)*\
+ Specifies the mean based on which the images are normalized.
+- **img_scale**: *float, default=1/256*\
+ Specifies the scale based on which the images are normalized.
+- **pad_value**: *list, default=(0, 0, 0)*\
+ Specifies the pad value based on which the images' width is padded.
+- **half_precision**: *bool, default=False*\
+ Enables inference using half (fp16) precision instead of single (fp32) precision. Valid only for GPU-based inference.
+
+
+#### `HighResolutionPoseEstimationLearner.eval`
+```python
+HighResolutionPoseEstimationLearner.eval(self, dataset, silent, verbose, use_subset, subset_size, images_folder_name, annotations_filename)
+```
+
+This method is used to evaluate a trained model on an evaluation dataset.
+Returns a dictionary containing statistics regarding evaluation.
+
+Parameters:
+
+- **dataset**: *object*\
+ Object that holds the evaluation dataset.
+ Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
+- **silent**: *bool, default=False*\
+ If set to True, disables all printing of evaluation progress reports and other information to STDOUT.
+- **verbose**: *bool, default=True*\
+ If set to True, enables the maximum verbosity.
+- **use_subset**: *bool, default=True*\
+ If set to True, a subset of the validation dataset is created and used in evaluation.
+- **subset_size**: *int, default=250*\
+ Controls the size of the validation subset.
+- **images_folder_name**: *str, default='val2017'*\
+ Folder name that contains the dataset images.
+ This folder should be contained in the dataset path provided.
+ Note that this is a folder name, not a path.
+- **annotations_filename**: *str, default='person_keypoints_val2017.json'*\
+ Filename of the annotations JSON file.
+ This file should be contained in the dataset path provided.
+
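+The following is a minimal evaluation sketch.
+It assumes that a COCO-style validation split is already available on disk; the dataset path, folder name and annotation file name below are only illustrative and should be adapted to your setup.
+
+```python
+from opendr.engine.datasets import ExternalDataset
+from opendr.perception.pose_estimation import HighResolutionPoseEstimationLearner
+
+pose_estimator = HighResolutionPoseEstimationLearner(device='cuda', first_pass_height=360,
+                                                     second_pass_height=540)
+pose_estimator.download()  # Download the default pretrained mobilenet model in the temp_path
+pose_estimator.load("./parent_dir/openpose_default")
+
+# Hypothetical location of a COCO-style validation split
+eval_dataset = ExternalDataset(path='./data/coco2017', dataset_type='COCO')
+results = pose_estimator.eval(eval_dataset, use_subset=True, subset_size=250,
+                              images_folder_name='val2017',
+                              annotations_filename='person_keypoints_val2017.json')
+print(results)
+```
+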
+#### `HighResolutionPoseEstimation.infer`
+```python
+HighResolutionPoseEstimation.infer(self, img, upsample_ratio, stride, track, smooth, multiscale, visualize)
+```
+
+This method is used to perform pose estimation on an image.
+Returns a list of `engine.target.Pose` objects, where each holds a pose, or returns an empty list if no detections were made.
+
+Parameters:
+
+- **img**: *object*\
+  Object of type `engine.data.Image`.
+- **upsample_ratio**: *int, default=4*\
+ Defines the amount of upsampling to be performed on the heatmaps and PAFs when resizing.
+- **stride**: *int, default=8*\
+ Defines the stride value for creating a padded image.
+- **track**: *bool, default=True*\
+  If True, infer propagates pose ids from previous frame results to track poses.
+- **smooth**: *bool, default=True*\
+ If True, smoothing is performed on pose keypoints between frames.
+- **multiscale**: *bool, default=False*\
+ Specifies whether evaluation will run in the predefined multiple scales setup or not.
+
+
+
+#### `HighResolutionPoseEstimationLearner.__first_pass`
+```python
+HighResolutionPoseEstimationLearner.__first_pass(self, img)
+```
+
+This method is used for extracting a heatmap from the input image, which indicates the locations of humans in the picture.
+
+Parameters:
+
+- **img**: *object*\
+  Object of type `engine.data.Image`.
+
+
+#### `HighResolutionPoseEstimationLearner.__second_pass`
+```python
+HighResolutionPoseEstimationLearner.__second_pass(self, img, net_input_height_size, max_width, stride, upsample_ratio, pad_value, img_mean, img_scale)
+```
+
+This method carries out the second inference step, which estimates the human poses on the provided image.
+Following the steps of the proposed method, this image should be the crop of the initial high-resolution image that corresponds to the area of interest of the generated heatmap.
+
+Parameters:
+
+- **img**: *object*\
+  Object of type `engine.data.Image`.
+- **net_input_height_size**: *int*\
+  The height used for resizing the image in the pose estimation procedure.
+- **max_width**: *int*\
+  The maximum width that the cropped image should have in order to keep the height-width ratio below a certain value.
+- **stride**: *int*\
+  The stride value of the mobilenet backbone, which reduces accuracy but increases inference speed.
+- **upsample_ratio**: *int, default=4*\
+ Defines the amount of upsampling to be performed on the heatmaps and PAFs when resizing.
+- **pad_value**: *list, default=(0, 0, 0)*\
+ Specifies the pad value based on which the images' width is padded.
+- **img_mean**: *list, default=(128, 128, 128)*\
+ Specifies the mean based on which the images are normalized.
+- **img_scale**: *float, default=1/256*\
+ Specifies the scale based on which the images are normalized.
+
+
+#### `HighResolutionPoseEstimation.download`
+```python
+HighResolutionPoseEstimation.download(self, path, mode, verbose, url)
+```
+
+Download utility for various Lightweight Open Pose components.
+Downloads files depending on mode and saves them in the path provided.
+It supports downloading:
+1. the default mobilenet pretrained model
+2. mobilenet, mobilenetv2 and shufflenet weights needed for training
+3. a test dataset with a single COCO image and its annotation
+
+Parameters:
+
+- **path**: *str, default=None*\
+ Local path to save the files, defaults to self.temp_path if None.
+- **mode**: *str, default="pretrained"*\
+  What file to download, can be one of "pretrained", "weights", "test_data".
+- **verbose**: *bool, default=False*\
+ Whether to print messages in the console.
+- **url**: *str, default=OpenDR FTP URL*\
+ URL of the FTP server.
+
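+For instance, the sketch below fetches the pretrained model and the single-image test data into a custom directory (the `./downloads` path is only an example):
+
+```python
+from opendr.perception.pose_estimation import HighResolutionPoseEstimationLearner
+
+pose_estimator = HighResolutionPoseEstimationLearner(device='cpu', temp_path='./downloads')
+pose_estimator.download(mode="pretrained", verbose=True)  # default pretrained mobilenet model
+pose_estimator.download(mode="test_data", verbose=True)   # single COCO image and its annotation
+```
+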
+#### `HighResolutionPoseEstimation.load`
+```python
+HighResolutionPoseEstimation.load(self, path, verbose)
+```
+This method is used to load a pretrained model that has been trained with Lightweight OpenPose.
+The model is loaded from inside the directory of the path provided, using the metadata .json file included.
+
+Parameters:
+- **path**: *str*\
+ Path of the model to be loaded.
+- **verbose**: *bool, default=False*\
+ If set to True, prints a message on success.
+
+
+#### Examples
+
+* **Inference and result drawing example on a test .jpg image using OpenCV.**
+ ```python
+ import cv2
+ from opendr.perception.pose_estimation import HighResolutionPoseEstimationLearner
+ from opendr.perception.pose_estimation import draw
+ from opendr.engine.data import Image
+
+ pose_estimator = HighResolutionPoseEstimationLearner(device='cuda', num_refinement_stages=2,
+ mobilenet_use_stride=False, half_precision=False,
+ first_pass_height=360,
+ second_pass_height=540)
+ pose_estimator.download() # Download the default pretrained mobilenet model in the temp_path
+
+ pose_estimator.load("./parent_dir/openpose_default")
+  pose_estimator.download(mode="test_data") # Download test data taken from COCO2017
+
+ img = Image.open('./parent_dir/dataset/image/000000000785_1080.jpg')
+ orig_img = img.opencv() # Keep original image
+ current_poses = pose_estimator.infer(img)
+ img_opencv = img.opencv()
+ for pose in current_poses:
+ draw(img_opencv, pose)
+ img_opencv = cv2.addWeighted(orig_img, 0.6, img_opencv, 0.4, 0)
+ cv2.imshow('Result', img_opencv)
+ cv2.waitKey(0)
+ ```
+
+
+#### Performance Evaluation
+
+
+In order to check the performance of the *HighResolutionPoseEstimationLearner*, it has been tested on various platforms and with the different optimizations that Lightweight OpenPose offers.
+The experiments are conducted on a 1080p image.
+
+
+#### Lightweight OpenPose with resizing to 256 pixels
+| **Method** | **CPU i7-9700K (FPS)** | **RTX 2070 (FPS)** | **Jetson TX2 (FPS)** | **Xavier NX (FPS)** |
+|:------------------------------------------------:|-----------------------|-------------------|----------------------|---------------------|
+| OpenDR - Baseline | 0.9 | 46.3 | 4.6 | 6.4 |
+| OpenDR - Full | 2.9 | 83.1 | 11.2 | 13.5 |
+
+
+#### Lightweight OpenPose without resizing
+| Method | CPU i7-9700K (FPS) | RTX 2070 (FPS) | Jetson TX2 (FPS) | Xavier NX (FPS) |
+|-------------------|--------------------|-----------------|------------------|-----------------|
+| OpenDR - Baseline | 0.05 | 2.6 | 0.3 | 0.5 |
+| OpenDR - Full | 0.2 | 10.8 | 1.4 | 3.1 |
+
+
+#### High-Resolution Pose Estimation
+| Method | CPU i7-9700K (FPS) | RTX 2070 (FPS) | Jetson TX2 (FPS) | Xavier NX (FPS) |
+|------------------------|--------------------|----------------|------------------|-----------------|
+| HRPoseEstim - Baseline | 2.3 | 13.6 | 1.4 | 1.8 |
+| HRPoseEstim - Half | 2.7 | 16.1 | 1.3 | 3.0 |
+| HRPoseEstim - Stride | 8.1 | 27.0 | 4 | 4.9 |
+| HRPoseEstim - Stages | 3.7 | 16.5 | 1.9 | 2.7 |
+| HRPoseEstim - H+S | 8.2 | 25.9 | 3.6 | 5.5 |
+| HRPoseEstim - Full | 10.9 | 31.7 | 4.8 | 6.9 |
+
+As shown in the previous tables, OpenDR Lightweight OpenPose achieves higher FPS when it resizes the input image to 256 pixels.
+The smaller image is easier to process, but, as shown in the tables below, the method falls apart in terms of accuracy and yields no detections.
+
+We have evaluated the effect of using different inference settings, namely:
+- *HRPoseEstim - Baseline*, which refers to directly using the High Resolution Pose Estimation method, which is based on Lightweight OpenPose,
+- *HRPoseEstim - Half*, which refers to enabling inference in half (FP16) precision,
+- *HRPoseEstim - Stride*, which refers to increasing stride by two in the input layer of the model,
+- *HRPoseEstim - Stages*, which refers to removing the refinement stages,
+- *HRPoseEstim - H+S*, which uses both half precision and increased stride, and
+- *HRPoseEstim - Full*, which refers to combining all three available optimizations.
+In all cases, a 1080p image was used as input to the models.
+
+The average precision and average recall on the COCO evaluation split are also reported in the tables below:
+
+
+#### Lightweight OpenPose with resizing
+| Method | Average Precision (IoU=0.50) | Average Recall (IoU=0.50) |
+|-------------------|------------------------------|---------------------------|
+| OpenDR - Baseline | 0.101 | 0.267 |
+| OpenDR - Full | 0.031 | 0.044 |
+
+
+
+
+#### Lightweight OpenPose without resizing
+| Method | Average Precision (IoU=0.50) | Average Recall (IoU=0.50) |
+|-------------------|------------------------------|---------------------------|
+| OpenDR - Baseline | 0.695 | 0.749 |
+| OpenDR - Full | 0.389 | 0.441 |
+
+
+
+#### High Resolution Pose Estimation
+| Method | Average Precision (IoU=0.50) | Average Recall (IoU=0.50) |
+|------------------------|------------------------------|---------------------------|
+| HRPoseEstim - Baseline | 0.615 | 0.637 |
+| HRPoseEstim - Half | 0.604 | 0.621 |
+| HRPoseEstim - Stride | 0.262 | 0.274 |
+| HRPoseEstim - Stages | 0.539 | 0.562 |
+| HRPoseEstim - H+S | 0.254 | 0.267 |
+| HRPoseEstim - Full | 0.259 | 0.272 |
+
+The average precision and the average recall have been calculated on a 1080p version of the COCO2017 validation dataset, and the results are reported in the table below:
+
+| Method | Average Precision (IoU=0.50) | Average Recall (IoU=0.50) |
+|-------------------|------------------------------|---------------------------|
+| HRPoseEstim - Baseline | 0.518 | 0.536 |
+| HRPoseEstim - Half | 0.509 | 0.520 |
+| HRPoseEstim - Stride | 0.143 | 0.149 |
+| HRPoseEstim - Stages | 0.474 | 0.496 |
+| HRPoseEstim - H+S | 0.134 | 0.139 |
+| HRPoseEstim - Full | 0.141 | 0.150 |
+
+For measuring the precision and recall we used the standard approach proposed for COCO, using an Intersection over Union (IoU) metric at 0.5.
+
+
+#### Notes
+
+For the metrics of the algorithm, the COCO dataset evaluation scores are used, as explained [here](https://cocodataset.org/#keypoints-eval).
+
+The keypoints and the way poses are constructed follow the original method described [here](https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/blob/master/TRAIN-ON-CUSTOM-DATASET.md).
+
+Pose keypoint IDs are matched as follows:
+
+| Keypoint ID | Keypoint name | Keypoint abbrev. |
+|------------- |---------------- |------------------ |
+| 0 | nose | nose |
+| 1 | neck | neck |
+| 2 | right shoulder | r_sho |
+| 3 | right elbow | r_elb |
+| 4 | right wrist | r_wri |
+| 5 | left shoulder | l_sho |
+| 6 | left elbow | l_elb |
+| 7 | left wrist | l_wri |
+| 8 | right hip | r_hip |
+| 9 | right knee | r_knee |
+| 10 | right ankle | r_ank |
+| 11 | left hip | l_hip |
+| 12 | left knee | l_knee |
+| 13 | left ankle | l_ank |
+| 14 | right eye | r_eye |
+| 15 | left eye | l_eye |
+| 16 | right ear | r_ear |
+| 17 | left ear | l_ear |
+
+
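+As an illustration of how these IDs line up with a returned pose, the hypothetical sketch below prints each detected keypoint next to its name; it assumes `current_poses` comes from the inference example above and that `pose.data` holds one `(x, y)` pair per keypoint ID in the order of the table, with `-1` marking undetected keypoints.
+
+```python
+KPT_NAMES = ['nose', 'neck', 'r_sho', 'r_elb', 'r_wri', 'l_sho', 'l_elb', 'l_wri',
+             'r_hip', 'r_knee', 'r_ank', 'l_hip', 'l_knee', 'l_ank',
+             'r_eye', 'l_eye', 'r_ear', 'l_ear']
+
+for pose in current_poses:  # poses returned by HighResolutionPoseEstimationLearner.infer
+    for kpt_id, (x, y) in enumerate(pose.data):
+        if x == -1:  # keypoint was not detected in this pose
+            continue
+        print(f"{kpt_id:2d} {KPT_NAMES[kpt_id]:7s} ({x}, {y})")
+```
+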
+#### References
+[1] OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, [arXiv](https://arxiv.org/abs/1812.08008).
+[2] Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose, [arXiv](https://arxiv.org/abs/1811.12004).
diff --git a/docs/reference/human-model-generation.md b/docs/reference/human-model-generation.md
index 8bd3997cb8..71bac046de 100644
--- a/docs/reference/human-model-generation.md
+++ b/docs/reference/human-model-generation.md
@@ -77,7 +77,7 @@ Documentation on how to use this node can be found [here](../../projects/opendr_
#### Tutorials and Demos
A demo in the form of a Jupyter Notebook is available
-[here](../../projects/simulation/human_model_generation/demos/model_generation.ipynb).
+[here](../../projects/python/simulation/human_model_generation/demos/model_generation.ipynb).
#### Example
@@ -95,8 +95,8 @@ A demo in the form of a Jupyter Notebook is available
OPENDR_HOME = os.environ["OPENDR_HOME"]
# We load a full-body image of a human as well as an image depicting its corresponding silhouette.
- rgb_img = Image.open(os.path.join(OPENDR_HOME, 'projects/simulation/human_model_generation/demos', 'imgs_input/rgb/result_0004.jpg'))
- msk_img = Image.open(os.path.join(OPENDR_HOME, 'projects/simulation/human_model_generation/demos', 'imgs_input/msk/result_0004.jpg'))
+ rgb_img = Image.open(os.path.join(OPENDR_HOME, 'projects/python/simulation/human_model_generation/demos', 'imgs_input/rgb/result_0004.jpg'))
+ msk_img = Image.open(os.path.join(OPENDR_HOME, 'projects/python/simulation/human_model_generation/demos', 'imgs_input/msk/result_0004.jpg'))
# We initialize learner. Using the infer method, we generate human 3D model.
model_generator = PIFuGeneratorLearner(device='cuda', checkpoint_dir='./temp')
diff --git a/docs/reference/image_based_facial_emotion_estimation.md b/docs/reference/image_based_facial_emotion_estimation.md
new file mode 100644
index 0000000000..11f4b8acbf
--- /dev/null
+++ b/docs/reference/image_based_facial_emotion_estimation.md
@@ -0,0 +1,331 @@
+## image_based_facial_emotion_estimation module
+
+The *image_based_facial_emotion_estimation* module contains the *FacialEmotionLearner* class, which inherits from the abstract class *Learner*.
+
+### Class FacialEmotionLearner
+Bases: `engine.learners.Learner`
+
+The *FacialEmotionLearner* class is an implementation of the state-of-the-art method ESR [[1]](#1) for efficient facial feature learning with wide ensemble-based convolutional neural networks.
+An ESR consists of two building blocks.
+(1) The base of the network is an array of convolutional layers for low- and middle-level feature learning.
+(2) These informative features are then shared with independent convolutional branches that constitute the ensemble.
+From this point, each branch can learn distinctive features while competing for a common resource - the shared layers.
+The [FacialEmotionLearner](/src/opendr/perception/facial_expression_recognition/image_based_facial_emotion_estimation/facial_emotion_learner.py) class has the following public methods:
+
+
+#### `FacialEmotionLearner` constructor
+```python
+FacialEmotionLearner(self, lr, batch_size, temp_path, device, device_ind, validation_interval,
+ max_training_epoch, momentum, ensemble_size, base_path_experiment, name_experiment, dimensional_finetune, categorical_train,
+ base_path_to_dataset, max_tuning_epoch, diversify)
+```
+
+Constructor parameters:
+
+- **lr**: *float, default=0.1*\
+ Specifies the initial learning rate to be used during training.
+- **batch_size**: *int, default=32*\
+  Specifies the number of samples to be bundled up in a batch during training.
+ This heavily affects memory usage, adjust according to your system.
+- **temp_path**: *str, default='temp'*\
+ Specifies a path where the algorithm saves the checkpoints and onnx optimized model (if needed).
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
+ Specifies the device to be used.
+- **device_ind**: *list, default=[0]*\
+ List of GPU indices to be used if the device is 'cuda'.
+- **validation_interval**: *int, default=1*\
+ Specifies the validation interval.
+- **max_training_epoch**: *int, default=2*\
+ Specifies the maximum number of epochs the training should run for.
+- **momentum**: *float, default=0.9*\
+  Specifies the momentum value used for the optimizer.
+- **ensemble_size**: *int, default=9*\
+ Specifies the number of ensemble branches in the model.
+- **base_path_experiment**: *str, default='./experiments/'*\
+ Specifies the path in which the experimental results will be saved.
+- **name_experiment**: *str, default='esr_9'*\
+ String name for saving checkpoints.
+- **dimensional_finetune**: *bool, default=True*\
+ Specifies if the model should be fine-tuned on dimensional data or not.
+- **categorical_train**: *bool, default=False*\
+ Specifies if the model should be trained on categorical data or not.
+- **base_path_to_dataset**: *str, default='./data/AffectNet'*\
+ Specifies the dataset path.
+- **max_tuning_epoch**: *int, default=1*\
+  Specifies the maximum number of epochs for which the model should be fine-tuned on dimensional data.
+- **diversify**: *bool, default=False*\
+  Specifies whether the learner diversifies the features of different branches or not.
+
+#### `FacialEmotionLearner.fit`
+```python
+FacialEmotionLearner.fit(self)
+```
+
+This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
+
+#### `FacialEmotionLearner.eval`
+```python
+FacialEmotionLearner.eval(self, eval_type, current_branch_on_training)
+```
+
+This method is used to evaluate a trained model on an evaluation dataset.
+Returns a dictionary containing stats regarding evaluation.
+
+Parameters:
+
+- **eval_type**: *str, default='categorical'*\
+  Specifies the type of data that the model is evaluated on.
+ It can be either categorical or dimensional data.
+- **current_branch_on_training**: *int, default=0*\
+  Specifies the index of the trained branch which should be evaluated on the validation data.
+
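+A short evaluation sketch is given below; it assumes that a model has already been trained or loaded, as in the examples further down, and that the AffectNet validation data is available under the configured dataset path (all paths are only placeholders):
+
+```python
+from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+
+learner = FacialEmotionLearner(device="cpu", ensemble_size=1,
+                               base_path_experiment='./experiments/',
+                               base_path_to_dataset='./data/AffectNet')
+learner.load(learner.ensemble_size, path_to_saved_network=learner.base_path_experiment,
+             fix_backbone=True)
+eval_results = learner.eval(eval_type='categorical', current_branch_on_training=0)
+print(eval_results)
+```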
+
+#### `FacialEmotionLearner.init_model`
+```python
+FacialEmotionLearner.init_model(self, num_branches)
+```
+
+This method is used to initialize the model.
+
+Parameters:
+
+- **num_branches**: *int*\
+  Specifies the number of ensemble branches in the model. The ESR_9 model is built with 9 branches by default.
+
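+For example, assuming a learner has already been constructed as in the examples further down, re-initializing the network with the default nine branches is a single call:
+
+```python
+learner.init_model(num_branches=9)
+```
+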
+#### `FacialEmotionLearner.infer`
+```python
+FacialEmotionLearner.infer(self, input_batch)
+```
+
+This method is used to perform inference on an image or a batch of images.
+It returns the dimensional emotion results and also the categorical emotion results as an object of `engine.target.Category`, provided that a proper `engine.data.Image` input object is given.
+
+Parameters:
+
+- **input_batch**: *object*\
+  Object of type `engine.data.Image`. It can also be a list of Image objects, or a Torch tensor which will be converted to an Image object.
+
+#### `FacialEmotionLearner.save`
+```python
+FacialEmotionLearner.save(self, state_dicts, base_path_to_save_model)
+```
+This method is used to save a trained model.
+Provided with the path (absolute or relative), it creates the "path" directory, if it does not already exist.
+Inside this folder, the model is saved as "model_name.pt" and the metadata file as "model_name.json". If the directory already exists, the "model_name.pt" and "model_name.json" files are overwritten.
+
+If [`self.optimize`](#FacialEmotionLearner.optimize) was run previously, it saves the optimized ONNX model in a similar fashion with an ".onnx" extension, by copying it from the `self.temp_path` where it was previously saved during conversion.
+
+Parameters:
+
+- **state_dicts**: *object*\
+ Object of type Python dictionary containing the trained model weights.
+- **base_path_to_save_model**: *str*\
+ Specifies the path in which the model will be saved.
+
+#### `FacialEmotionLearner.load`
+```python
+FacialEmotionLearner.load(self, ensemble_size, path_to_saved_network, file_name_base_network,
+ file_name_conv_branch, fix_backbone)
+```
+
+Loads the model from inside the directory of the path provided, using the metadata .json file included.
+
+Parameters:
+
+- **ensemble_size**: *int, default=9*\
+ Specifies the number of ensemble branches in the model for which the pretrained weights should be loaded.
+- **path_to_saved_network**: *str, default="./trained_models/esr_9"*\
+ Path of the model to be loaded.
+- **file_name_base_network**: *str, default="Net-Base-Shared_Representations.pt"*\
+ The file name of the base network to be loaded.
+- **file_name_conv_branch**: *str, default="Net-Branch_{}.pt"*\
+ The file name of the ensemble branch network to be loaded.
+- **fix_backbone**: *bool*\
+ If true, all the model weights except the classifier are fixed so that the last layers' weights are fine-tuned on dimensional data.
+ Otherwise, all the model weights will be trained from scratch.
+
+
+#### `FacialEmotionLearner.optimize`
+```python
+FacialEmotionLearner.optimize(self, do_constant_folding)
+```
+
+This method is used to optimize a trained model to ONNX format which can be then used for inference.
+
+Parameters:
+
+- **do_constant_folding**: *bool, default=False*\
+ ONNX format optimization.
+ If True, the constant-folding optimization is applied to the model during export.
+
+
+#### `FacialEmotionLearner.download`
+```python
+@staticmethod
+FacialEmotionLearner.download(self, path, mode, url)
+```
+
+Downloads data and saves them in the path provided.
+
+Parameters:
+
+- **path**: *str, default=None*\
+ Local path to save the files, defaults to `self.temp_dir` if None.
+- **mode**: *str, default="data"*\
+ What file to download, can be "data".
+- **url**: *str, default=OpenDR FTP URL*\
+ URL of the FTP server.
+
+
+#### Data preparation
+ Download the [AffectNet](http://mohammadmahoor.com/affectnet/) [[2]](https://www.computer.org/csdl/magazine/mu/2012/03/mmu2012030034/13rRUxjQyrW) dataset, and organize it in the following structure:
+ ```
+ AffectNet/
+ Training_Labeled/
+ 0/
+ 1/
+ ...
+ n/
+ Training_Unlabeled/
+ 0/
+ 1/
+ ...
+ n/
+ Validation/
+ 0/
+ 1/
+ ...
+ n/
+ ```
+ In order to do that, you need to run the following function:
+ ```python
+ from opendr.perception.facial_expression_recognition.image_based_facial_emotion_estimation.algorithm.utils import datasets
+ datasets.pre_process_affect_net(base_path_to_images, base_path_to_annotations, base_destination_path, set_index)
+ ```
+ This pre-processes the AffectNet dataset by cropping and resizing the images into 96 x 96 pixels, and organizing them in folders with 500 images each.
+ Each image is renamed to follow the pattern "[id][emotion_idx][valence times 1000]_[arousal times 1000].jpg".
+
+#### Pre-trained models
+
+The pretrained models on the AffectNet Categorical dataset are provided by [[1]](#1) and can be found [here](https://github.com/siqueira-hc/Efficient-Facial-Feature-Learning-with-Wide-Ensemble-based-Convolutional-Neural-Networks/tree/master/model/ml/trained_models/esr_9).
+**Please note that the pretrained weights cannot be used for commercial purposes.**
+
+#### Examples
+
+* **Train the ensemble model on AffectNet Categorical dataset and then fine-tune it on the AffectNet dimensional dataset**
+ The training and evaluation dataset should be present in the path provided.
+ The `batch_size` argument should be adjusted according to available memory.
+
+ ```python
+ from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+
+ learner = FacialEmotionLearner(device="cpu", temp_path='./tmp',
+ batch_size=2, max_training_epoch=1, ensemble_size=1,
+ name_experiment='esr_9', base_path_experiment='./experiments/',
+ lr=1e-1, categorical_train=True, dimensional_finetune=True,
+ base_path_to_dataset='./data', max_tuning_epoch=1)
+ learner.fit()
+ learner.save(state_dicts=learner.model.to_state_dict(),
+ base_path_to_save_model=learner.base_path_experiment,
+ current_branch_save=8)
+ ```
+
+* **Inference on a batch of images**
+ ```python
+ from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+  from opendr.perception.facial_expression_recognition.image_based_facial_emotion_estimation.algorithm.utils import datasets
+  from torch.utils.data import DataLoader
+
+ learner = FacialEmotionLearner(device="cpu", temp_path='./tmp',
+ batch_size=2, max_training_epoch=1, ensemble_size=1,
+ name_experiment='esr_9', base_path_experiment='./experiments/',
+ lr=1e-1, categorical_train=True, dimensional_finetune=True,
+ base_path_to_dataset='./data', max_tuning_epoch=1)
+
+ # Download the validation data
+ dataset_path = learner.download(mode='data')
+ val_data = datasets.AffectNetCategorical(idx_set=2,
+ max_loaded_images_per_label=2,
+ transforms=None,
+ is_norm_by_mean_std=False,
+                                           base_path_to_affectnet=dataset_path)
+
+ val_loader = DataLoader(val_data, batch_size=32, shuffle=False, num_workers=8)
+ batch = next(iter(val_loader))[0]
+ learner.load(learner.ensemble_size, path_to_saved_network=learner.base_path_experiment, fix_backbone=True)
+ ensemble_emotion_results, ensemble_dimension_results = learner.infer(batch[0])
+ ```
+
+* **Optimization example for a previously trained model**
+ Inference can be run with the trained model after running self.optimize.
+ ```python
+ from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+
+ learner = FacialEmotionLearner(device="cpu", temp_path='./tmp',
+ batch_size=2, max_training_epoch=1, ensemble_size=1,
+ name_experiment='esr_9', base_path_experiment='./experiments/',
+ lr=1e-1, categorical_train=True, dimensional_finetune=True,
+ base_path_to_dataset='./data', max_tuning_epoch=1)
+
+
+ learner.load(learner.ensemble_size, path_to_saved_network=learner.base_path_experiment, fix_backbone=True)
+ learner.optimize(do_constant_folding=True)
+  learner.save(state_dicts=learner.model.to_state_dict(), base_path_to_save_model='./parent_dir/optimized_model')
+ ```
+
+
+#### Performance Evaluation
+
+The tests were conducted on the following computational devices:
+- Intel(R) Xeon(R) Gold 6230R CPU on server
+- Nvidia Jetson TX2
+- Nvidia Jetson Xavier AGX
+- Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors
+
+
+Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU.
+The ESR and its extension, diversified_ESR (denoted as ESR*), which learns diversified feature representations to improve model generalisation, are implemented in *FacialEmotionLearner*.
+The ESR-n and ESR*-n denote the ESR and diversified-ESR models with *n* ensemble branches, respectively.
+
+The model can receive either single images as input or a video, which can be captured by webcam, and perform the prediction frame-by-frame.
+
+We report speed (single sample per inference) as the mean of 100 runs, and the energy (Joules) on embedded devices.
+The noted memory is the maximum allocated memory on GPU during inference.
+
+| Method | Acc. (%) | Params (M) | Mem. (MB) |
+|--------------|----------|------------|-----------|
+| ESR-9 | 87.17 | 20.35 | 402.99 |
+| ESR-15 | 88.59 | 33.67 | 455.61 |
+| ESR*-9 | 89.15 | 20.83 | 406.83 |
+| ESR*-15 | 89.34 | 34.47 | 460.73 |
+
+The inference speed (evaluations/second) of both learners on various computational devices is as follows:
+
+| Method | CPU | Jetson TX2 | Jetson Xavier | RTX 2080 Ti |
+|--------------|-------|------------|---------------|-------------|
+| ESR-9 | 22.23 | 27.08 | 28.79 | 117.91 |
+| ESR-15 | 13.86 | 17.76 | 18.17 | 91.78 |
+| ESR*-9 | 5.24 | 6.60 | 12.45 | 33.40 |
+| ESR*-15 | 3.38 | 4.18 | 8.47 | 20.57 |
+
+The energy (Joules) consumed by both learners' inference on embedded devices is shown in the following table:
+
+| Method | Jetson TX2 | Jetson Xavier |
+|---------|------------|---------------|
+| ESR-9 | 0.96 | 0.67 |
+| ESR-15 | 1.16 | 0.93 |
+| ESR*-9 | 3.38 | 1.41 |
+| ESR*-15 | 6.26 | 2.51 |
+
+
+
+
+## References
+
+[1]
+[Siqueira, Henrique, Sven Magg, and Stefan Wermter. "Efficient facial feature learning with wide ensemble-based convolutional neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.](
+https://ojs.aaai.org/index.php/AAAI/article/view/6037)
+
+[2]
+[Mollahosseini, Ali, Behzad Hasani, and Mohammad H. Mahoor. "Affectnet: A database for facial expression, valence, and arousal computing in the wild." IEEE Transactions on Affective Computing 10.1 (2017): 18-31.](
+https://ieeexplore.ieee.org/abstract/document/8013713)
diff --git a/docs/reference/images/hand_gesture_examples.png b/docs/reference/images/hand_gesture_examples.png
new file mode 100644
index 0000000000..b9d0a88d55
Binary files /dev/null and b/docs/reference/images/hand_gesture_examples.png differ
diff --git a/docs/reference/index.md b/docs/reference/index.md
index b7061793da..cf0ea9c1ff 100644
--- a/docs/reference/index.md
+++ b/docs/reference/index.md
@@ -1,6 +1,6 @@
# OpenDR Toolkit Reference Manual
-*Release 1.1*
+*Release 2.0.0*
@@ -16,6 +16,8 @@ Neither the copyright holder nor any applicable licensor will be liable for any
## Table of Contents
+- [Installation](/docs/reference/installation.md)
+- [Customization](/docs/reference/customize.md)
- Inference and Training API
- `engine` Module
- [engine.data Module](engine-data.md)
@@ -26,29 +28,36 @@ Neither the copyright holder nor any applicable licensor will be liable for any
- [face_recognition_learner Module](face-recognition.md)
- facial expression recognition:
- [landmark_based_facial_expression_recognition](landmark-based-facial-expression-recognition.md)
+ - [image_based_facial_emotion_estimation](image_based_facial_emotion_estimation.md)
- pose estimation:
- [lightweight_open_pose Module](lightweight-open-pose.md)
+ - [high_resolution_pose_estimation Module](high-resolution-pose-estimation.md)
- activity recognition:
- - [activity_recognition Module](activity-recognition.md)
- - action recognition:
- - [skeleton_based_action_recognition](skeleton-based-action-recognition.md)
+ - [skeleton-based action recognition](skeleton-based-action-recognition.md)
+ - [continual skeleton-based action recognition Module](skeleton-based-action-recognition.md#class-costgcnlearner)
+ - [x3d Module](activity-recognition.md#class-x3dlearner)
+ - [continual x3d Module](activity-recognition.md#class-cox3dlearner)
+ - [continual transformer encoder Module](continual-transformer-encoder.md)
- speech recognition:
- [matchboxnet Module](matchboxnet.md)
- [edgespeechnets Module](edgespeechnets.md)
- [quadraticselfonn Module](quadratic-selfonn.md)
- object detection 2d:
+ - [nanodet Module](nanodet.md)
- [detr Module](detr.md)
- [gem Module](gem.md)
- [retinaface Module](face-detection-2d-retinaface.md)
- [centernet Module](object-detection-2d-centernet.md)
- [ssd Module](object-detection-2d-ssd.md)
- [yolov3 Module](object-detection-2d-yolov3.md)
+ - [yolov5 Module](object-detection-2d-yolov5.md)
- [seq2seq-nms Module](object-detection-2d-nms-seq2seq_nms.md)
- object detection 3d:
- [voxel Module](voxel-object-detection-3d.md)
- object tracking 2d:
- [fair_mot Module](object-tracking-2d-fair-mot.md)
- [deep_sort Module](object-tracking-2d-deep-sort.md)
+ - [siamrpn Module](object-tracking-2d-siamrpn.md)
- object tracking 3d:
- [ab3dmot Module](object-tracking-3d-ab3dmot.md)
- multimodal human centric:
@@ -77,9 +86,10 @@ Neither the copyright holder nor any applicable licensor will be liable for any
- [human_model_generation Module](human-model-generation.md)
- `utils` Module
- [Hyperparameter Tuning Module](hyperparameter_tuner.md)
+ - [Ambiguity Measure Module](ambiguity_measure.md)
- `Stand-alone Utility Frameworks`
- [Engine Agnostic Gym Environment with Reactive extension (EAGERx)](eagerx.md)
-- [ROSBridge Package](rosbridge.md)
+- [ROS Bridge Package](opendr-ros-bridge.md)
- [C Inference API](c-api.md)
- [data.h](c-data-h.md)
- [target.h](c-target-h.md)
@@ -89,48 +99,54 @@ Neither the copyright holder nor any applicable licensor will be liable for any
- `C API` Module
- [face recognition Demo](/projects/c_api)
- `control` Module
- - [mobile_manipulation Demo](/projects/control/mobile_manipulation)
- - [single_demo_grasp Demo](/projects/control/single_demo_grasp)
+ - [mobile_manipulation Demo](/projects/python/control/mobile_manipulation)
+ - [single_demo_grasp Demo](/projects/python/control/single_demo_grasp)
- `opendr workspace` Module
- [opendr_ws](/projects/opendr_ws)
- `perception` Module
- activity recognition:
- - [activity_recognition Demo](/projects/perception/activity_recognition/demos/online_recognition)
+ - [activity_recognition Demo](/projects/python/perception/activity_recognition/demos/online_recognition)
- face recognition:
- - [face_recognition_Demo](/projects/perception/face_recognition)
+ - [face_recognition_Demo](/projects/python/perception/face_recognition)
- facial expression recognition:
- - [landmark_based_facial_expression_recognition Demo](/projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition)
+ - [landmark_based_facial_expression_recognition Demo](/projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition)
+ - [image_based_facial_emotion_estimation Demo](/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation)
- heart anomaly detection:
- - [heart anomaly detection Demo](/projects/perception/heart_anomaly_detection)
+ - [heart anomaly detection Demo](/projects/python/perception/heart_anomaly_detection)
- pose estimation:
- - [lightweight_open_pose Demo](/projects/perception/lightweight_open_pose)
+ - [lightweight_open_pose Demo](/projects/python/perception/pose_estimation/lightweight_open_pose)
+ - [high_resolution_pose_estimation Demo](/projects/python/perception/pose_estimation/high_resolution_pose_estimation)
- multimodal human centric:
- - [rgbd_hand_gesture_learner Demo](/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition)
- - [audiovisual_emotion_recognition Demo](/projects/perception/multimodal_human_centric/audiovisual_emotion_recognition)
+ - [rgbd_hand_gesture_learner Demo](/projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition)
+ - [audiovisual_emotion_recognition Demo](/projects/python/perception/multimodal_human_centric/audiovisual_emotion_recognition)
- object detection 2d:
- - [detr Demo](/projects/perception/object_detection_2d/detr)
- - [gem Demo](/projects/perception/object_detection_2d/gem)
- - [retinaface Demo](/projects/perception/object_detection_2d/retinaface)
- - [centernet Demo](/projects/perception/object_detection_2d/centernet)
- - [ssd Demo](/projects/perception/object_detection_2d/ssd)
- - [yolov3 Demo](/projects/perception/object_detection_2d/yolov3)
- - [seq2seq-nms Demo](/projects/perception/object_detection_2d/nms/seq2seq-nms)
+ - [nanodet Demo](/projects/python/perception/object_detection_2d/nanodet)
+ - [detr Demo](/projects/python/perception/object_detection_2d/detr)
+ - [gem Demo](/projects/python/perception/object_detection_2d/gem)
+ - [retinaface Demo](/projects/python/perception/object_detection_2d/retinaface)
+ - [centernet Demo](/projects/python/perception/object_detection_2d/centernet)
+ - [ssd Demo](/projects/python/perception/object_detection_2d/ssd)
+ - [yolov3 Demo](/projects/python/perception/object_detection_2d/yolov3)
+  - [yolov5 Demo](/projects/python/perception/object_detection_2d/yolov5)
+ - [seq2seq-nms Demo](/projects/python/perception/object_detection_2d/nms/seq2seq-nms)
- object detection 3d:
- - [voxel Demo](/projects/perception/object_detection_3d/demos/voxel_object_detection_3d)
+ - [voxel Demo](/projects/python/perception/object_detection_3d/demos/voxel_object_detection_3d)
- object tracking 2d:
- - [fair_mot Demo](/projects/perception/object_tracking_2d/demos/fair_mot_deep_sort)
+ - [fair_mot Demo](/projects/python/perception/object_tracking_2d/demos/fair_mot_deep_sort)
+ - [siamrpn Demo](/projects/python/perception/object_tracking_2d/demos/siamrpn)
- panoptic segmentation:
- - [efficient_ps Demo](/projects/perception/panoptic_segmentation/efficient_ps)
+ - [efficient_ps Demo](/projects/python/perception/panoptic_segmentation/efficient_ps)
- semantic segmentation:
- - [bisnet Demo](/projects/perception/semantic_segmentation/bisenet)
+ - [bisnet Demo](/projects/python/perception/semantic_segmentation/bisenet)
- action recognition:
- - [skeleton_based_action_recognition Demo](/projects/perception/skeleton_based_action_recognition)
+ - [skeleton_based_action_recognition Demo](/projects/python/perception/skeleton_based_action_recognition)
- fall detection:
- - [fall_detection Demo](/projects/perception/fall_detection.md)
- - [full_map_posterior_slam Module](/projects/perception/slam/full_map_posterior_gmapping)
+ - [fall_detection Demo](/projects/python/perception/fall_detection.md)
+ - [full_map_posterior_slam Module](/projects/python/perception/slam/full_map_posterior_gmapping)
- `simulation` Module
- - [SMPL+D Human Models Dataset](/projects/simulation/SMPL%2BD_human_models)
- - [Human-Data-Generation-Framework](/projects/simulation/human_dataset_generation)
- - [Human Model Generation Demos](/projects/simulation/human_dataset_generation)
+ - [SMPL+D Human Models Dataset](/projects/python/simulation/SMPL%2BD_human_models)
+ - [Human-Data-Generation-Framework](/projects/python/simulation/human_dataset_generation)
+ - [Human Model Generation Demos](/projects/python/simulation/human_dataset_generation)
- `utils` Module
- - [Hyperparameter Tuning Module](/projects/utils/hyperparameter_tuner)
+ - [Hyperparameter Tuning Module](/projects/python/utils/hyperparameter_tuner)
+- [Known Issues](/docs/reference/issues.md)
diff --git a/docs/reference/installation.md b/docs/reference/installation.md
index 1eb9042ee3..12383747a6 100644
--- a/docs/reference/installation.md
+++ b/docs/reference/installation.md
@@ -1,68 +1,29 @@
# Installing OpenDR toolkit
OpenDR can be installed in the following ways:
-1. By cloning this repository (CPU/GPU support)
-2. Using *pip* (CPU/GPU support)
-3. Using *docker* (CPU/GPU support)
+1. Using *pip* (CPU/GPU support)
+2. Using *docker* (CPU/GPU support)
+3. By cloning this repository (CPU/GPU support, for advanced users only)
The following table summarizes the installation options based on your system architecture and OS:
-| Installation Method | CPU/GPU | OS |
-|---------------------|----------|-----------------------|
-| Clone & Install | Both | Ubuntu 20.04 (x86-64) |
-| pip | Both | Ubuntu 20.04 (x86-64) |
-| docker | Both | Linux / Windows |
+| Installation Method | OS |
+|-----------------------|-----------------------|
+| Clone & Install | Ubuntu 20.04 (x86-64) |
+| pip | Ubuntu 20.04 (x86-64) |
+| docker | Linux / Windows |
+Note that pip installation includes only the Python API of the toolkit.
+If you need to use all the functionalities of the toolkit (e.g., ROS nodes, etc.), then you need either to use the pre-compiled docker images or to follow the installation instructions for cloning and building the toolkit.
-# Installing by cloning OpenDR repository (Ubuntu 20.04, x86, architecture)
-
-This is the recommended way of installing the whole toolkit, since it allows for fully exploiting all the provided functionalities.
-To install the toolkit, please first make sure that you have `git` available on your system.
-```bash
-sudo apt install git
-```
-Then, clone the toolkit:
-```bash
-git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
-```
-You are then ready to install the toolkit:
-```bash
-cd opendr
-./bin/install.sh
-```
-The installation script automatically installs all the required dependencies.
-Note that this might take a while (~10-20min depending on your machine and network connection), while the script also makes system-wide changes.
-Using dockerfiles is strongly advised (please see below), unless you know what you are doing.
-Please also make sure that you have enough RAM available for the installation (about 4GB of free RAM is needed for the full installation/compilation).
-
-
-If you want to install GPU-related dependencies, then you can appropriately set the `OPENDR_DEVICE` variable.
-The toolkit defaults to using CPU.
-Therefore, if you want to use GPU, please set this variable accordingly *before* running the installation script:
-```bash
-export OPENDR_DEVICE=gpu
-```
-The installation script creates a *virtualenv*, where the toolkit is installed.
-To activate OpenDR environment you can just source the `activate.sh`:
-```bash
-source ./bin/activate.sh
-```
-Then, you are ready to use the toolkit!
-
-**NOTE:** `OPENDR_DEVICE` does not alter the inference/training device at *runtime*.
-It only affects the dependency installation.
-You can use OpenDR API to change the inference device.
-
-You can also verify the installation by using the supplied Python and C unit tests:
+The toolkit is developed and tested on *Ubuntu 20.04 (x86-64)*.
+Please make sure that you have the most recent version of all tools by running
```bash
-make unittest
-make ctests
+sudo apt upgrade
```
-
-If you plan to use GPU-enabled functionalities, then you are advised to install [CUDA 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive), along with [CuDNN](https://developer.nvidia.com/cudnn).
-
-**HINT:** All tests probe for the `TEST_DEVICE` enviromental variable when running.
-If this enviromental variable is set during testing, it allows for easily running all tests on a different device (e.g., setting `TEST_DEVICE=cuda:0` runs all tests on the first GPU of the system).
+before installing the toolkit and then follow the installation instructions in the relevant section.
+All the required dependencies will be automatically installed (or explicit instructions are provided).
+Other platforms apart from Ubuntu 20.04, e.g., Windows, other Linux distributions, etc., are currently supported through docker images.
# Installing using *pip*
@@ -71,7 +32,7 @@ If this enviromental variable is set during testing, it allows for easily runnin
You can directly install the Python API of the OpenDR toolkit using pip.
First, install the required dependencies:
```bash
-sudo apt install python3.8-venv libfreetype6-dev git build-essential cmake python3-dev wget libopenblas-dev libsndfile1 libboost-dev libeigen3-dev
+sudo apt install python3.8-venv libfreetype6-dev git build-essential cmake python3-dev wget libopenblas-dev libsndfile1 libboost-dev libeigen3-dev
python3 -m venv venv
source venv/bin/activate
pip install wheel
If you have a CPU that does not support AVX2, then please also `export DISABLE_BCOLZ_AVX2=true`.
This is not needed for newer CPUs.
## Enabling GPU-acceleration
-The same OpenDR package is used for both CPU and GPU systems.
+The same OpenDR package is used for both CPU and GPU systems.
However, you need to have the appropriate GPU-enabled dependencies installed to use a GPU with OpenDR.
If you plan to use GPU, then you should first install [mxnet-cuda](https://mxnet.apache.org/versions/1.4.1/install/index.html?platform=Linux&language=Python&processor=CPU) and [detectron2](https://detectron2.readthedocs.io/en/latest/tutorials/install.html).
For example, if you stick with the default PyTorch version (1.8) and use CUDA11.2, then you can simply follow:
```bash
-sudo apt install python3.8-venv libfreetype6-dev git build-essential cmake python3-dev wget libopenblas-dev libsndfile1 libboost-dev libeigen3-dev
+sudo apt install python3.8-venv libfreetype6-dev git build-essential cmake python3-dev wget libopenblas-dev libsndfile1 libboost-dev libeigen3-dev
python3 -m venv venv
source venv/bin/activate
pip install wheel
@@ -116,30 +77,29 @@ For example, if you just want to perform pose estimation you can just run:
pip install opendr-toolkit-engine
pip install opendr-toolkit-pose-estimation
```
-Note that `opendr-toolkit-engine` must be always installed in your system, while multiple tools can be installed in this way.
+Note that `opendr-toolkit-engine` must always be installed in your system, while multiple tools can be installed in this way.
OpenDR distributes the following packages that can be installed:
-- *opendr-toolkit-activity_recognition*
-- *opendr-toolkit-speech_recognition*
-- *opendr-toolkit-semantic_segmentation*
-- *opendr-toolkit-skeleton_based_action_recognition*
-- *opendr-toolkit-face_recognition*
-- *opendr-toolkit-facial_expression_recognition*
-- *opendr-toolkit-panoptic_segmentation*
-- *opendr-toolkit-pose_estimation*
-- *opendr-toolkit-compressive_learning*
-- *opendr-toolkit-hyperparameter_tuner*
-- *opendr-toolkit-heart_anomaly_detection*
-- *opendr-toolkit-human_model_generation*
-- *opendr-toolkit-multimodal_human_centric*
-- *opendr-toolkit-object_detection_2d*
-- *opendr-toolkit-object_tracking_2d*
-- *opendr-toolkit-object_detection_3d*
-- *opendr-toolkit-object_tracking_3d*
-- *opendr-toolkit-mobile_manipulation* (requires a functional ROS installation)
-- *opendr-toolkit-single_demo_grasp* (requires a functional ROS installation)
-
-
-Note that `opendr-toolkit` is actually just a metapackage that includes all the afformentioned packages.
+- *opendr-toolkit-activity-recognition*
+- *opendr-toolkit-speech-recognition*
+- *opendr-toolkit-semantic-segmentation*
+- *opendr-toolkit-skeleton-based-action-recognition*
+- *opendr-toolkit-face-recognition*
+- *opendr-toolkit-facial-expression-recognition*
+- *opendr-toolkit-panoptic-segmentation*
+- *opendr-toolkit-pose-estimation*
+- *opendr-toolkit-compressive-learning*
+- *opendr-toolkit-hyperparameter-tuner*
+- *opendr-toolkit-heart-anomaly-detection*
+- *opendr-toolkit-human-model-generation*
+- *opendr-toolkit-multimodal-human-centric*
+- *opendr-toolkit-object-detection-2d*
+- *opendr-toolkit-object-tracking-2d*
+- *opendr-toolkit-object-detection-3d*
+- *opendr-toolkit-object-tracking-3d*
+- *opendr-toolkit-ambiguity-measure*
+- *opendr-toolkit-fall-detection*
+
+Note that `opendr-toolkit` is actually just a metapackage that includes all the aforementioned packages.
# Installing using *docker*
@@ -162,34 +122,86 @@ source bin/activate.sh
If you want to display GTK-based applications from the Docker container (e.g., visualize results using OpenCV `imshow()`), then you should mount the X server socket inside the container, e.g.,
```bash
xhost +local:root
-sudo docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY opendr/opendr-toolkit:cpu_v1.1.1 /bin/bash
+sudo docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY opendr/opendr-toolkit:cpu_v2.0.0 /bin/bash
```
## GPU docker
If you want to use a CUDA-enabled container please install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).
Then, you can directly run the latest image with the command:
```bash
-sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda_v1.1.1
+sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda_v2.0.0
```
or, for an interactive session:
```bash
-sudo docker run --gpus all -it opendr/opendr-toolkit:cuda_v1.1.1 /bin/bash
+sudo docker run --gpus all -it opendr/opendr-toolkit:cuda_v2.0.0 /bin/bash
```
In this case, do not forget to enable the virtual environment with:
```bash
source bin/activate.sh
```
-## Build the docker images yourself _(optional)_
-Alternatively you can also build the docker images locally using the [Dockerfile](/Dockerfile) ([Dockerfile-cuda](/Dockerfile-cuda) for cuda) provided in the root folder of the toolkit.
-For the CPU image, execute the following commands:
+# Installing by cloning OpenDR repository (Ubuntu 20.04, x86 architecture)
+
+This is the recommended way of installing the whole toolkit, since it allows for fully exploiting all the provided functionalities.
+To install the toolkit, please first make sure that you have `git` available on your system.
+```bash
+sudo apt install git
+```
+Then, clone the toolkit:
```bash
git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
+```
+
+If you want to install GPU-related dependencies, then you can appropriately set the `OPENDR_DEVICE` variable.
+The toolkit defaults to using CPU.
+Therefore, if you want to use GPU, please set this variable accordingly *before* running the installation script:
+```bash
+export OPENDR_DEVICE=gpu
+```
+
+If you want to use ROS or ROS2, then you need to set the `ROS_DISTRO` variable *before* running the installation script so that additional required dependencies are correctly installed.
+This variable should be set to either `noetic` or `melodic` for ROS, and `foxy` or `humble` for ROS2.
+
+You are then ready to install the toolkit:
+```bash
cd opendr
-sudo docker build -t opendr/opendr-toolkit:cpu .
+./bin/install.sh
```
+The installation script automatically installs all the required dependencies.
+Note that this might take a while (~10-20min depending on your machine and network connection), while the script also makes system-wide changes.
+Using dockerfiles is strongly advised (please see above), unless you know what you are doing.
+Please also make sure that you have enough RAM available for the installation (about 4GB of free RAM is needed for the full installation/compilation).
+
-For the cuda-enabled image, first edit `/etc/docker/daemon.json` in order to set the default docker runtime:
+The installation script creates a *virtualenv*, where the toolkit is installed.
+To activate OpenDR environment you can just source the `activate.sh`:
+```bash
+source ./bin/activate.sh
+```
+Then, you are ready to use the toolkit!
+
+**NOTE:** `OPENDR_DEVICE` does not alter the inference/training device at *runtime*.
+It only affects the dependency installation.
+You can use OpenDR API to change the inference device.
+
+You can also verify the installation by using the supplied Python and C unit tests:
+```bash
+make unittest
+make ctests
+```
+
+If you plan to use GPU-enabled functionalities, then you are advised to install [CUDA 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive), along with [CuDNN](https://developer.nvidia.com/cudnn).
+
+**HINT:** All tests probe for the `TEST_DEVICE` environment variable when running.
+If this environment variable is set during testing, it allows for easily running all tests on a different device (e.g., setting `TEST_DEVICE=cuda:0` runs all tests on the first GPU of the system).
+
+
+## Nvidia embedded devices docker
+You can also run the corresponding docker image on an Nvidia embedded device (supported: TX2, Xavier NX and AGX).
+
+Note that the embedded device should be flashed with Jetpack 4.6.
+
+To enable GPU usage on the embedded device within docker, first edit `/etc/docker/daemon.json` in order to set the default docker runtime:
```
{
"runtimes": {
@@ -206,18 +218,35 @@ Restart docker afterwards:
```
sudo systemctl restart docker.service
```
-Then you can build the supplied dockerfile:
+
+
+You can directly run the corresponding docker image by running one of the below:
```bash
-git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
-cd opendr
-sudo docker build -t opendr/opendr-toolkit:cuda -f Dockerfile-cuda .
+sudo docker run -it opendr/opendr-toolkit:tx2_v2 /bin/bash
+sudo docker run -it opendr/opendr-toolkit:nx_v2 /bin/bash
+sudo docker run -it opendr/opendr-toolkit:agx_v2 /bin/bash
```
+This will give you access to a bash terminal within the docker container.
-In order to run them, the commands are respectively:
+After that, you should enable the environment variables inside the container with:
```bash
-sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cpu
+cd opendr
+source bin/activate_nvidia.sh
+source /opt/ros/noetic/setup.bash
+source projects/opendr_ws/devel/setup.bash
```
-and
+
+The embedded devices docker comes preinstalled with the OpenDR toolkit.
+It supports all tools under the perception package, as well as all corresponding ROS nodes.
+
+You can enable a USB camera, given it is mounted as `/dev/video0`, by running the container with the following arguments:
+```
+xhost +local:root
+sudo docker run -it --privileged -v /dev/video0:/dev/video0 opendr/opendr-toolkit:nx_v2 /bin/bash
```
-sudo docker run --gpus all -p 8888:8888 opendr/opendr-toolkit:cuda
+
+To use the docker on an embedded device with a monitor and a USB camera attached, as well as network access through the host's network settings, you can run:
```
+xhost +local:root
+sudo docker run -it --privileged --network host -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v /dev/video0:/dev/video0 opendr/opendr-toolkit:nx_v2 /bin/bash
+```
\ No newline at end of file
diff --git a/docs/reference/issues.md b/docs/reference/issues.md
new file mode 100644
index 0000000000..fa4fd1fc89
--- /dev/null
+++ b/docs/reference/issues.md
@@ -0,0 +1,39 @@
+# Known Issues
+
+This page includes known issues, compatibility issues as well as possible workarounds.
+
+
+## Issue: Some ROS nodes have a noticeable lag
+
+You should make sure that the queue size is set to 1 and that the buffer size is large enough to hold the input message.
+Even though we have set appropriate default values for the topics in order to avoid this issue, this also depends on your system configuration (e.g., the size of the messages published in the input topics).
+Be sure to check the discussion and explanation of this behavior in [#275](https://github.com/opendr-eu/opendr/issues/275).
+Essentially, due to the way ROS handles messages, a latency of at least 2 frames is expected.
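+If you implement your own subscribers, a configuration along the following lines helps avoid accumulating stale frames (an illustrative sketch only; the topic name and buffer size are placeholders, not OpenDR defaults):
+```python
+import rospy
+from sensor_msgs.msg import Image
+
+def callback(msg):
+    pass  # process only the most recent frame
+
+rospy.init_node("example_listener")
+# queue_size=1 drops stale frames; buff_size must be large enough for one full message
+rospy.Subscriber("/usb_cam/image_raw", Image, callback,
+                 queue_size=1, buff_size=10000000)
+rospy.spin()
+```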
+
+
+## Issue: The Docker image does not fit my embedded device
+
+This can affect several embedded devices, such as NX and TX2, which have limited storage on board.
+The easiest solution to this issue is to use external storage (e.g., an SD card or an external SSD).
+You can also check the [customization](develop/docs/reference/customize.md) instructions on how to manually build a docker image that fits your device.
+
+## Issue: I am trying to install the toolkit on Ubuntu 18.04/20.10/XX.XX, WSL, or any other Linux distribution and it doesn't work.
+
+The OpenDR toolkit targets native installation on Ubuntu 20.04.
+For any other system, you are advised to use the docker images, which are expected to work out-of-the-box on any configuration and operating system.
+
+
+## Issue: I cannot install the toolkit using `pip` / I cannot install the toolkit on Colab
+
+The OpenDR toolkit officially targets Ubuntu 20.04.
+For other systems, slight modifications might be needed in order to ensure that all dependencies are in place.
+Most parts of the toolkit will probably install without any issue on Colab or any other Ubuntu-like system.
+However, the behavior of `pip`'s dependency solver might cause issues (e.g., endless loops when trying to solve dependencies).
+In this case, it is suggested to remove any package that could cause a conflict, e.g.:
+```
+pip uninstall -y torch torchaudio fastai torchvision torchtext torchsummary kapre google-cloud-bigquery-storage yellowbrick tensorflow-metadata tensorflow-datasets numba imbalanced-learn googleapis-common-protos google-api-core imageio tensorboard
+```
+and then install the toolkit using the `--use-deprecated=legacy-resolver` flag, e.g.:
+```
+DISABLE_BCOLZ_AVX2=true pip install opendr-toolkit --use-deprecated=legacy-resolver
+```
diff --git a/docs/reference/mobile-manipulation.md b/docs/reference/mobile-manipulation.md
index b40fe513e1..294cccdcd1 100644
--- a/docs/reference/mobile-manipulation.md
+++ b/docs/reference/mobile-manipulation.md
@@ -130,7 +130,7 @@ The dependencies for this module automatically set up and compile a catkin works
To start required ROS nodes, please run the following before using the `MobileRLLearner` class:
```sh
-source ${OPENDR_HOME}/projects/control/mobile_manipulation/mobile_manipulation_ws/devel/setup.bash
+source ${OPENDR_HOME}/projects/python/control/mobile_manipulation/mobile_manipulation_ws/devel/setup.bash
roslaunch mobile_manipulation_rl [pr2,tiago]_analytical.launch
````
@@ -265,7 +265,7 @@ As this achieves very high control frequencies, we do not expect any benefits th
TABLE-1: Control frequency in Hertz.
| Model | AMD Ryzen 9 5900X (Hz) |
-| -------- | ---------------------- |
+| -------- | ---------------------- |
| MobileRL | 2200 |
@@ -294,7 +294,7 @@ TABLE-3: Platform compatibility evaluation.
#### Notes
##### HSR
-The HSR environment relies on packages that are part of the proprietory HSR simulator.
+The HSR environment relies on packages that are part of the proprietary HSR simulator.
If you have an HSR account with Toyota, please follow these steps to use the environment.
Otherwise ignore this section to use the other environments we provide.
@@ -307,7 +307,7 @@ Otherwise ignore this section to use the other environments we provide.
and add them to `pybind_add_module()` and `target_link_libraries()` two lines below that.
- Comment in the hsr parts in `src/pybindings` and the import of HSREnv in `mobileRL/envs/robotenv.py` to create the python bindings
-- Some HSR launchfiles are not opensource either and might need some small adjustments
+- Some HSR launchfiles are not open source either and might need some small adjustments
#### References
[1] Learning Kinematic Feasibility for Mobile Manipulation through Deep Reinforcement Learning,
diff --git a/docs/reference/nanodet.md b/docs/reference/nanodet.md
new file mode 100644
index 0000000000..765f210673
--- /dev/null
+++ b/docs/reference/nanodet.md
@@ -0,0 +1,289 @@
+## nanodet module
+
+The *nanodet* module contains the *NanodetLearner* class, which inherits from the abstract class *Learner*.
+
+### Class NanodetLearner
+Bases: `engine.learners.Learner`
+
+The *NanodetLearner* class is a wrapper of the Nanodet object detection algorithms based on the original
+[Nanodet implementation](https://github.com/RangiLyu/nanodet).
+It can be used to perform object detection on images (inference) and to train all predefined Nanodet object detection models, as well as new modular models defined by the user.
+
+The [NanodetLearner](../../src/opendr/perception/object_detection_2d/nanodet/nanodet_learner.py) class has the
+following public methods:
+
+#### `NanodetLearner` constructor
+```python
+NanodetLearner(self, model_to_use, iters, lr, batch_size, checkpoint_after_iter, checkpoint_load_iter, temp_path, device,
+ weight_decay, warmup_steps, warmup_ratio, lr_schedule_T_max, lr_schedule_eta_min, grad_clip)
+```
+
+Constructor parameters:
+
+- **model_to_use**: *{"EfficientNet_Lite0_320", "EfficientNet_Lite1_416", "EfficientNet_Lite2_512", "RepVGG_A0_416",
+ "t", "g", "m", "m_416", "m_0.5x", "m_1.5x", "m_1.5x_416", "plus_m_320", "plus_m_1.5x_320", "plus_m_416",
+ "plus_m_1.5x_416", "custom"}, default=plus_m_1.5x_416*\
+  Specifies the model to use and the config file that contains all hyperparameters for training, evaluation and inference, as in the original
+  [Nanodet implementation](https://github.com/RangiLyu/nanodet).
+  If you want to override some of these parameters, you can pass them as arguments to the learner.
+- **iters**: *int, default=None*\
+ Specifies the number of epochs the training should run for.
+- **lr**: *float, default=None*\
+ Specifies the initial learning rate to be used during training.
+- **batch_size**: *int, default=None*\
+ Specifies number of images to be bundled up in a batch during training.
+ This heavily affects memory usage, adjust according to your system.
+- **checkpoint_after_iter**: *int, default=None*\
+ Specifies per how many training iterations a checkpoint should be saved.
+ If it is set to 0 no checkpoints will be saved.
+- **checkpoint_load_iter**: *int, default=None*\
+ Specifies which checkpoint should be loaded.
+ If it is set to 0, no checkpoints will be loaded.
+- **temp_path**: *str, default=''*\
+  Specifies a path where the algorithm saves the checkpoints along with the logging files. If *''*, the `cfg.save_dir` will be used instead.
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
+ Specifies the device to be used.
+- **weight_decay**: *float, default=None*\
+  Specifies the weight decay used by the optimizer during training.
+- **warmup_steps**: *int, default=None*\
+  Specifies the number of warm-up steps at the beginning of training.
+- **warmup_ratio**: *float, default=None*\
+  Specifies the ratio of the initial learning rate used during warm-up.
+- **lr_schedule_T_max**: *int, default=None*\
+  Specifies the *T_max* parameter of the learning rate schedule.
+- **lr_schedule_eta_min**: *float, default=None*\
+  Specifies the minimum learning rate (*eta_min*) of the learning rate schedule.
+- **grad_clip**: *int, default=None*\
+  Specifies the gradient clipping value used during training.
+
+#### `NanodetLearner.fit`
+```python
+NanodetLearner.fit(self, dataset, val_dataset, logging_path, verbose, seed)
+```
+
+This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
+Parameters:
+
+- **dataset**: *ExternalDataset*\
+ Object that holds the training dataset.
+ Can be of type `ExternalDataset`.
+- **val_dataset** : *ExternalDataset, default=None*\
+ Object that holds the validation dataset.
+ Can be of type `ExternalDataset`.
+- **logging_path** : *str, default=''*\
+  Subdirectory of temp_path where log files and TensorBoard data are saved.
+- **verbose** : *bool, default=True*\
+ Enables the maximum verbosity and the logger.
+- **seed** : *int, default=123*\
+ Seed for repeatability.
+
+#### `NanodetLearner.eval`
+```python
+NanodetLearner.eval(self, dataset, verbose)
+```
+
+This method is used to evaluate a trained model on an evaluation dataset.
+Saves a txt logger file containing stats regarding evaluation.
+
+Parameters:
+
+- **dataset** : *ExternalDataset*\
+ Object that holds the evaluation dataset.
+- **verbose**: *bool, default=True*\
+ Enables the maximum verbosity and logger.
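+
+A minimal evaluation sketch (the dataset root below is a placeholder and should point to a COCO-style validation set laid out as described in the Examples section):
+```python
+from opendr.engine.datasets import ExternalDataset
+from opendr.perception.object_detection_2d import NanodetLearner
+
+nanodet = NanodetLearner(model_to_use="plus_m_1.5x_416", device="cuda")
+nanodet.download("./predefined_examples", mode="pretrained")
+nanodet.load("./predefined_examples/nanodet-plus_m_1.5x_416/nanodet-plus_m_1.5x_416.ckpt", verbose=True)
+
+val_dataset = ExternalDataset("/path/to/coco/root", "coco")
+nanodet.eval(val_dataset)
+```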
+
+#### `NanodetLearner.infer`
+```python
+NanodetLearner.infer(self, input, threshold, verbose)
+```
+
+This method is used to perform object detection on an image.
+Returns an `engine.target.BoundingBoxList` object, which contains bounding boxes that are described by the top-left corner and
+their width and height, or returns an empty list if no detections were made on the input image.
+
+Parameters:
+- **input** : *Image*\
+  Image type object to perform inference on.
+- **threshold**: *float, default=0.35*\
+ Specifies the threshold for object detection inference.
+ An object is detected if the confidence of the output is higher than the specified threshold.
+- **verbose**: *bool, default=True*\
+ Enables the maximum verbosity and logger.
+
+#### `NanodetLearner.save`
+```python
+NanodetLearner.save(self, path, verbose)
+```
+
+This method is used to save a trained model with its metadata.
+Provided with the path, it creates the "path" directory, if it does not already exist.
+Inside this folder, the model is saved as *"nanodet_{model_name}.pth"* and a metadata file *"nanodet_{model_name}.json"*.
+If the directory already exists, the *"nanodet_{model_name}.pth"* and *"nanodet_{model_name}.json"* files are overwritten.
+
+Parameters:
+
+- **path**: *str, default=None*\
+  Path to save the model. If None, the learner's `"temp_folder"` or `"cfg.save_dir"` will be used instead.
+- **verbose**: *bool, default=True*\
+ Enables the maximum verbosity and logger.
+
+#### `NanodetLearner.load`
+```python
+NanodetLearner.load(self, path, verbose)
+```
+
+This method is used to load a previously saved model from its saved folder.
+Loads the model from inside the directory of the path provided, using the metadata .json file included.
+
+Parameters:
+
+- **path**: *str, default=None*\
+ Path of the model to be loaded.
+- **verbose**: *bool, default=True*\
+ Enables the maximum verbosity and logger.
+
+#### `NanodetLearner.download`
+```python
+NanodetLearner.download(self, path, mode, model, verbose, url)
+```
+
+Downloads data needed for the various functions of the learner, e.g., pretrained models as well as test data.
+
+Parameters:
+
+- **path**: *str, default=None*\
+ Specifies the folder where data will be downloaded. If *None*, the *self.temp_path* directory is used instead.
+- **mode**: *{'pretrained', 'images', 'test_data'}, default='pretrained'*\
+  If *'pretrained'*, downloads a pretrained detector model for the *model_to_use* architecture that was chosen at learner initialization.
+  If *'images'*, downloads an image to perform inference on.
+  If *'test_data'*, downloads a dummy dataset for testing purposes.
+- **verbose**: *bool, default=False*\
+ Enables the maximum verbosity and logger.
+- **url**: *str, default=OpenDR FTP URL*\
+ URL of the FTP server.
+
+
+#### Tutorials and Demos
+
+A tutorial on performing inference is available.
+Furthermore, demos on performing [training](../../projects/perception/object_detection_2d/nanodet/train_demo.py),
+[evaluation](../../projects/perception/object_detection_2d/nanodet/eval_demo.py) and
+[inference](../../projects/perception/object_detection_2d/nanodet/inference_demo.py) are also available.
+
+
+
+#### Examples
+
+* **Training example using an `ExternalDataset`.**
+
+  To train properly, the architecture weights must be downloaded to a predefined directory before `fit` is called; in this example the directory name is "predefined_examples".
+  The default architecture is *'plus_m_1.5x_416'*.
+ The training and evaluation dataset root should be present in the path provided, along with the annotation files.
+ The default COCO 2017 training data can be found [here](https://cocodataset.org/#download) (train, val, annotations).
+  All training parameters (optimizer, lr schedule, losses, model parameters etc.) can be changed in the model config file
+  in the [config directory](../../src/opendr/perception/object_detection_2d/nanodet/algorithm/config).
+  You can find more information in [config file detail](../../src/opendr/perception/object_detection_2d/nanodet/algorithm/config/config_file_detail.md).
+  For easier use, the following parameters can be overridden directly through the NanodetLearner arguments:
+  (iters, lr, batch_size, checkpoint_after_iter, checkpoint_load_iter, temp_path, device, weight_decay, warmup_steps,
+  warmup_ratio, lr_schedule_T_max, lr_schedule_eta_min, grad_clip)
+
+ **Note**
+
+  The Nanodet tool can be used with any PASCAL VOC- or COCO-like dataset. The only thing needed is to provide the correct root and dataset type.
+
+  If *'voc'* is chosen for *dataset*, the directory must look like this:
+
+ - root folder
+ - train
+ - Annotations
+ - image1.xml
+ - image2.xml
+ - ...
+ - JPEGImages
+ - image1.jpg
+ - image2.jpg
+ - ...
+ - val
+ - Annotations
+ - image1.xml
+ - image2.xml
+ - ...
+ - JPEGImages
+ - image1.jpg
+ - image2.jpg
+ - ...
+
+  On the other hand, if *'coco'* is chosen for *dataset*, the directory must look like this:
+
+ - root folder
+ - train2017
+ - image1.jpg
+ - image2.jpg
+ - ...
+ - val2017
+ - image1.jpg
+ - image2.jpg
+ - ...
+ - annotations
+ - instances_train2017.json
+ - instances_val2017.json
+
+  You can change the default annotation and image directories in [dataset](../../src/opendr/perception/object_detection_2d/nanodet/algorithm/nanodet/data/dataset/__init__.py).
+
+ ```python
+ import argparse
+
+ from opendr.engine.datasets import ExternalDataset
+ from opendr.perception.object_detection_2d import NanodetLearner
+
+
+ if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--dataset", help="Dataset to train on", type=str, default="coco", choices=["voc", "coco"])
+ parser.add_argument("--data-root", help="Dataset root folder", type=str)
+ parser.add_argument("--model", help="Model that config file will be used", type=str)
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--batch-size", help="Batch size to use for training", type=int, default=6)
+ parser.add_argument("--lr", help="Learning rate to use for training", type=float, default=5e-4)
+ parser.add_argument("--checkpoint-freq", help="Frequency in-between checkpoint saving and evaluations", type=int, default=50)
+ parser.add_argument("--n-epochs", help="Number of total epochs", type=int, default=300)
+ parser.add_argument("--resume-from", help="Epoch to load checkpoint file and resume training from", type=int, default=0)
+
+ args = parser.parse_args()
+
+ if args.dataset == 'voc':
+ dataset = ExternalDataset(args.data_root, 'voc')
+ val_dataset = ExternalDataset(args.data_root, 'voc')
+ elif args.dataset == 'coco':
+ dataset = ExternalDataset(args.data_root, 'coco')
+ val_dataset = ExternalDataset(args.data_root, 'coco')
+
+ nanodet = NanodetLearner(model_to_use=args.model, iters=args.n_epochs, lr=args.lr, batch_size=args.batch_size,
+ checkpoint_after_iter=args.checkpoint_freq, checkpoint_load_iter=args.resume_from,
+ device=args.device)
+
+ nanodet.download("./predefined_examples", mode="pretrained")
+ nanodet.load("./predefined_examples/nanodet-{}/nanodet-{}.ckpt".format(args.model, args.model), verbose=True)
+ nanodet.fit(dataset, val_dataset)
+ nanodet.save()
+ ```
+
+* **Inference and result drawing example on a test image.**
+
+  This example shows how to perform inference on an image and draw the resulting bounding boxes using a Nanodet model that is pretrained on the COCO dataset.
+  Moreover, inference can be performed on all images in a folder, the frames of a video, or a webcam feed, depending on the provided *mode*.
+  In this example, a pretrained model is downloaded first, as in the training example, followed by an image to run inference on.
+  With the same *path* parameter you can choose a folder or a video file as the inference input.
+  Last but not least, if 'webcam' is used as *mode*, the *camid* parameter must be set to select the webcam device on your machine.
+
+ ```python
+ import argparse
+ from opendr.perception.object_detection_2d import NanodetLearner
+
+ if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model that config file will be used", type=str)
+ args = parser.parse_args()
+
+ nanodet = NanodetLearner(model_to_use=args.model, device=args.device)
+
+ nanodet.download("./predefined_examples", mode="pretrained")
+ nanodet.load("./predefined_examples/nanodet-{}/nanodet-{}.ckpt".format(args.model, args.model), verbose=True)
+ nanodet.download("./predefined_examples", mode="images")
+ boxes = nanodet.infer(path="./predefined_examples/000000000036.jpg")
+ ```
\ No newline at end of file
diff --git a/docs/reference/object-detection-2d-nms-seq2seq_nms.md b/docs/reference/object-detection-2d-nms-seq2seq_nms.md
index 513233c833..c1269c108f 100644
--- a/docs/reference/object-detection-2d-nms-seq2seq_nms.md
+++ b/docs/reference/object-detection-2d-nms-seq2seq_nms.md
@@ -262,7 +262,7 @@ Parameters:
ssd = SingleShotDetectorLearner(device='cuda')
ssd.download(".", mode="pretrained")
ssd.load("./ssd_default_person", verbose=True)
- img = Image.open(OPENDR_HOME + '/projects/perception/object_detection_2d/nms/img_temp/frame_0000.jpg')
+ img = Image.open(OPENDR_HOME + '/projects/python/perception/object_detection_2d/nms/img_temp/frame_0000.jpg')
if not isinstance(img, Image):
img = Image(img)
boxes = ssd.infer(img, threshold=0.25, custom_nms=seq2SeqNMSLearner)
diff --git a/docs/reference/object-detection-2d-yolov5.md b/docs/reference/object-detection-2d-yolov5.md
new file mode 100644
index 0000000000..58420328eb
--- /dev/null
+++ b/docs/reference/object-detection-2d-yolov5.md
@@ -0,0 +1,81 @@
+## YOLOv5DetectorLearner module
+
+The *yolov5* module contains the *YOLOv5DetectorLearner* class, which inherits from the abstract class *Learner*.
+
+### Class YOLOv5DetectorLearner
+Bases: `engine.learners.Learner`
+
+The *YOLOv5DetectorLearner* class is a wrapper of the YOLO detector[[1]](#yolo-1)
+[Ultralytics implementation](https://github.com/ultralytics/yolov5) based on its availability in the [Pytorch Hub](https://pytorch.org/hub/ultralytics_yolov5/).
+It can be used to perform object detection on images (inference only).
+
+The [YOLOv5DetectorLearner](/src/opendr/perception/object_detection_2d/yolov5/yolov5_learner.py) class has the following
+public methods:
+
+#### `YOLOv5DetectorLearner` constructor
+```python
+YOLOv5DetectorLearner(self, model_name, path, device, temp_path, force_reload)
+```
+
+Constructor parameters:
+
+- **model_name**: *str*\
+ Specifies the name of the model to be used. Available models:
+ - 'yolov5n' (46.0% mAP, 1.9M parameters)
+ - 'yolov5s' (56.0% mAP, 7.2M parameters)
+ - 'yolov5m' (63.9% mAP, 21.2M parameters)
+ - 'yolov5l' (67.2% mAP, 46.5M parameters)
+ - 'yolov5x' (68.9% mAP, 86.7M parameters)
+ - 'yolov5n6' (50.7% mAP, 3.2M parameters)
+ - 'yolov5s6' (63.0% mAP, 16.8M parameters)
+  - 'yolov5m6' (69.0% mAP, 35.7M parameters)
+ - 'yolov5l6' (71.6% mAP, 76.8M parameters)
+ - 'custom' (for custom models, the ```path``` parameter must be set to point to the location of the weights file.)
+Note that mAP (0.5) is reported on the [COCO val2017 dataset](https://github.com/ultralytics/yolov5/releases).
+- **path**: *str, default=None*\
+ For custom-trained models, specifies the path to the weights to be loaded.
+- **device**: *{'cuda', 'cpu'}, default='cuda'*\
+ Specifies the device used for inference.
+- **temp_path**: *str, default='.'*\
+ Specifies the path to where the weights will be downloaded when using pretrained models.
+- **force_reload**: *bool, default=False*\
+ Sets the `force_reload` parameter of the pytorch hub `load` method.
+ This fixes issues with caching when set to `True`.
+
+
+#### `YOLOv5DetectorLearner.infer`
+The `infer` method:
+```python
+YOLOv5DetectorLearner.infer(self, img, size)
+```
+
+Performs inference on a single image.
+
+Parameters:
+
+- **img**: *object*\
+ Object of type engine.data.Image or OpenCV.
+- **size**: *int, default=640*\
+ Size of image for inference.
+  The image is resized to this size on both sides before being fed to the model.
+
+#### Examples
+
+* Inference and result drawing example on a test .jpg image using OpenCV:
+ ```python
+ import torch
+ from opendr.engine.data import Image
+ from opendr.perception.object_detection_2d import YOLOv5DetectorLearner
+ from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+ yolo = YOLOv5DetectorLearner(model_name='yolov5s', device='cpu')
+
+ torch.hub.download_url_to_file('https://ultralytics.com/images/zidane.jpg', 'zidane.jpg') # download image
+ im1 = Image.open('zidane.jpg') # OpenDR image
+
+ results = yolo.infer(im1)
+ draw_bounding_boxes(im1.opencv(), results, yolo.classes, show=True, line_thickness=3)
+ ```
+
+#### References
+[1] YOLOv5: The friendliest AI architecture you'll ever use.
diff --git a/docs/reference/object-tracking-2d-siamrpn.md b/docs/reference/object-tracking-2d-siamrpn.md
new file mode 100644
index 0000000000..6953be0a40
--- /dev/null
+++ b/docs/reference/object-tracking-2d-siamrpn.md
@@ -0,0 +1,221 @@
+## SiamRPNLearner module
+
+The *SiamRPN* module contains the *SiamRPNLearner* class, which inherits from the abstract class *Learner*.
+
+### Class SiamRPNLearner
+Bases: `engine.learners.Learner`
+
+The *SiamRPNLearner* class is a wrapper of the SiamRPN detector[[1]](#siamrpn-1)
+[GluonCV implementation](https://github.com/dmlc/gluon-cv/tree/master/gluoncv/model_zoo/siamrpn).
+It can be used to perform object tracking on videos (inference) as well as train new object tracking models.
+
+The [SiamRPNLearner](/src/opendr/perception/object_tracking_2d/siamrpn/siamrpn_learner.py) class has the following public methods:
+
+#### `SiamRPNLearner` constructor
+```python
+SiamRPNLearner(self, device, n_epochs, num_workers, warmup_epochs, lr, weight_decay, momentum, cls_weight, loc_weight, batch_size, temp_path)
+```
+
+Parameters:
+
+- **device**: *{'cuda', 'cpu'}, default='cuda'*\
+ Specifies the device to be used.
+- **n_epochs**: *int, default=50*\
+ Specifies the number of epochs to be used during training.
+- **num_workers**: *int, default=1*\
+ Specifies the number of workers to be used when loading datasets or performing evaluation.
+- **warmup_epochs**: *int, default=2*\
+ Specifies the number of epochs during which the learning rate is annealed to **lr**.
+- **lr**: *float, default=0.001*\
+ Specifies the initial learning rate to be used during training.
+- **weight_decay**: *float, default=0*\
+ Specifies the weight decay to be used during training.
+- **momentum**: *float, default=0.9*\
+ Specifies the momentum to be used for optimizer during training.
+- **cls_weight**: *float, default=1.*\
+ Specifies the classification loss multiplier to be used for optimizer during training.
+- **loc_weight**: *float, default=1.2*\
+ Specifies the localization loss multiplier to be used for optimizer during training.
+- **batch_size**: *int, default=32*\
+ Specifies the batch size to be used during training.
+- **temp_path**: *str, default=''*\
+ Specifies a path to be used for data downloading.
+
+
+#### `SiamRPNLearner.fit`
+```python
+SiamRPNLearner.fit(self, dataset, log_interval, n_gpus, verbose)
+```
+
+This method is used to train the algorithm on a `DetectionDataset` or `ExternalDataset` dataset and also performs evaluation on a validation set using the trained model.
+Returns a dictionary containing stats regarding the training process.
+
+Parameters:
+
+- **dataset**: *object*\
+ Object that holds the training dataset.
+- **log_interval**: *int, default=20*\
+ Training loss is printed in stdout after this amount of iterations.
+- **n_gpus**: *int, default=1*\
+ If CUDA is enabled, training can be performed on multiple GPUs as set by this parameter.
+- **verbose**: *bool, default=True*\
+ If True, enables maximum verbosity.
+
+#### `SiamRPNLearner.eval`
+```python
+SiamRPNLearner.eval(self, dataset)
+```
+
+Performs evaluation on a dataset. The OTB dataset is currently supported.
+
+Parameters:
+
+- **dataset**: *object*\
+ Object that holds dataset to perform evaluation on.
+ Expected type is `ExternalDataset` with `otb2015` dataset type.
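+
+A minimal evaluation sketch (the download and dataset paths are placeholders; the exact directory layout expected for OTB evaluation may differ on your setup):
+```python
+from opendr.engine.datasets import ExternalDataset
+from opendr.perception.object_tracking_2d import SiamRPNLearner
+
+learner = SiamRPNLearner(device="cuda")
+learner.download(".", mode="pretrained")
+learner.load("siamrpn_opendr")
+
+# downloading the full OTB2015 dataset can take a long time
+learner.download("./OTB", mode="otb2015")
+dataset = ExternalDataset("./OTB", "otb2015")
+learner.eval(dataset)
+```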
+
+#### `SiamRPNLearner.infer`
+```python
+SiamRPNLearner.infer(self, img, init_box)
+```
+
+Performs inference on a single image.
+If the `init_box` is provided, the tracker is initialized.
+If not, the current position of the target is updated by running inference on the image.
+
+Parameters:
+
+- **img**: *object*\
+ Object of type engine.data.Image.
+- **init_box**: *object, default=None*\
+ Object of type engine.target.TrackingAnnotation.
+ If provided, it is used to initialize the tracker.
+
+#### `SiamRPNLearner.save`
+```python
+SiamRPNLearner.save(self, path, verbose)
+```
+
+Saves a model in OpenDR format at the specified path.
+The model name is extracted from the base folder in the specified path.
+
+Parameters:
+
+- **path**: *str*\
+ Specifies the folder where the model will be saved.
+ The model name is extracted from the base folder of this path.
+- **verbose**: *bool default=False*\
+ If True, enables maximum verbosity.
+
+#### `SiamRPNLearner.load`
+```python
+SiamRPNLearner.load(self, path, verbose)
+```
+
+Loads a model which was previously saved in OpenDR format at the specified path.
+
+Parameters:
+
+- **path**: *str*\
+ Specifies the folder where the model will be loaded from.
+- **verbose**: *bool default=False*\
+ If True, enables maximum verbosity.
+
+#### `SiamRPNLearner.download`
+```python
+SiamRPNLearner.download(self, path, mode, verbose, url, overwrite)
+```
+
+Downloads data needed for the various functions of the learner, e.g., pre-trained models as well as test data.
+
+Parameters:
+
+- **path**: *str, default=None*\
+ Specifies the folder where data will be downloaded.
+ If *None*, the *self.temp_path* directory is used instead.
+- **mode**: *{'pretrained', 'video', 'test_data', 'otb2015'}, default='pretrained'*\
+ If *'pretrained'*, downloads a pre-trained detector model.
+ If *'video'*, downloads a single video to perform inference on.
+ If *'test_data'* downloads a dummy version of the OTB dataset for testing purposes.
+ If *'otb2015'*, attempts to download the OTB dataset (100 videos).
+ This process lasts a long time.
+- **verbose**: *bool default=False*\
+ If True, enables maximum verbosity.
+- **url**: *str, default=OpenDR FTP URL*\
+ URL of the FTP server.
+- **overwrite**: *bool, default=False*\
+  If True, files will be re-downloaded if they already exist.
+ This can solve some issues with large downloads.
+
+#### Examples
+
+* **Training example using `ExternalDataset` objects**.
+ Training is supported solely via the `ExternalDataset` class.
+ See [class README](/src/opendr/perception/object_tracking_2d/siamrpn/README.md) for a list of supported datasets and presumed data directory structure.
+ Example training on COCO Detection dataset:
+ ```python
+ from opendr.engine.datasets import ExternalDataset
+ from opendr.perception.object_tracking_2d import SiamRPNLearner
+
+ dataset = ExternalDataset("/path/to/data/root", "coco")
+ learner = SiamRPNLearner(device="cuda", n_epochs=50, batch_size=32,
+ lr=1e-3)
+ learner.fit(dataset)
+ learner.save("siamrpn_custom")
+ ```
+
+* **Inference and result drawing example on a test mp4 video using OpenCV.**
+ ```python
+ import cv2
+ from opendr.engine.target import TrackingAnnotation
+ from opendr.perception.object_tracking_2d import SiamRPNLearner
+
+ learner = SiamRPNLearner(device="cuda")
+ learner.download(".", mode="pretrained")
+ learner.load("siamrpn_opendr")
+
+ learner.download(".", mode="video")
+ cap = cv2.VideoCapture("tc_Skiing_ce.mp4")
+
+ init_bbox = TrackingAnnotation(left=598, top=312, width=75, height=200, name=0, id=0)
+
+ frame_no = 0
+ while cap.isOpened():
+ ok, frame = cap.read()
+ if not ok:
+ break
+
+ if frame_no == 0:
+ # first frame, pass init_bbox to infer function to initialize the tracker
+ pred_bbox = learner.infer(frame, init_bbox)
+ else:
+ # after the first frame only pass the image to infer
+ pred_bbox = learner.infer(frame)
+
+ frame_no += 1
+
+ cv2.rectangle(frame, (pred_bbox.left, pred_bbox.top),
+ (pred_bbox.left + pred_bbox.width, pred_bbox.top + pred_bbox.height),
+ (0, 255, 255), 3)
+ cv2.imshow('Tracking Result', frame)
+ cv2.waitKey(1)
+
+ cv2.destroyAllWindows()
+ ```
+
+
+#### Performance evaluation
+
+We have measured the performance on the OTB2015 dataset in terms of success and FPS on an RTX 2070.
+| Tracker name             | Success | FPS   |
+|--------------------------|---------|-------|
+| siamrpn_alexnet_v2_otb15 | 0.668   | 132.1 |
+
+#### References
+[1]
+High Performance Visual Tracking with Siamese Region Proposal Network,
+[PDF](https://openaccess.thecvf.com/content_cvpr_2018/papers/Li_High_Performance_Visual_CVPR_2018_paper.pdf).
diff --git a/docs/reference/opendr-ros-bridge.md b/docs/reference/opendr-ros-bridge.md
new file mode 100755
index 0000000000..a98666a3ab
--- /dev/null
+++ b/docs/reference/opendr-ros-bridge.md
@@ -0,0 +1,431 @@
+## opendr_bridge package
+
+
+The *opendr_bridge* package provides an interface to convert OpenDR data types and targets into ROS-compatible ones, similar to CvBridge.
+The *ROSBridge* class provides two methods for each data type X:
+1. *from_ros_X()*: converts the ROS equivalent of X into the OpenDR data type
+2. *to_ros_X()*: converts the OpenDR data type into the ROS equivalent of X
+
+### Class ROSBridge
+
+The *ROSBridge* class provides an interface to convert OpenDR data types and targets into ROS-compatible ones.
+
+The ROSBridge class has the following public methods:
+
+#### `ROSBridge` constructor
+The constructor only initializes the state of the class and does not require any input arguments.
+```python
+ROSBridge(self)
+```
+
+#### `ROSBridge.from_ros_image`
+
+```python
+ROSBridge.from_ros_image(self,
+ message,
+ encoding)
+```
+
+This method converts a ROS Image into an OpenDR image.
+
+Parameters:
+
+- **message**: *sensor_msgs.msg.Image*\
+ ROS image to be converted into an OpenDR image.
+- **encoding**: *str, default='bgr8'*\
+ Encoding to be used for the conversion (inherited from CvBridge).
+
+#### `ROSBridge.to_ros_image`
+
+```python
+ROSBridge.to_ros_image(self,
+ image,
+ encoding)
+```
+
+This method converts an OpenDR image into a ROS image.
+
+Parameters:
+
+- **image**: *engine.data.Image*\
+ OpenDR image to be converted into a ROS message.
+- **encoding**: *str, default='bgr8'*\
+ Encoding to be used for the conversion (inherited from CvBridge).
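+
+A minimal usage sketch inside a ROS node (illustrative only; the topic names are placeholders and the import assumes a sourced OpenDR ROS workspace):
+```python
+import rospy
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+bridge = ROSBridge()
+pub = rospy.Publisher("/opendr/image_out", ROS_Image, queue_size=1)
+
+def callback(msg):
+    img = bridge.from_ros_image(msg, encoding="bgr8")  # OpenDR engine.data.Image
+    # ... run an OpenDR tool on `img` here ...
+    pub.publish(bridge.to_ros_image(img, encoding="bgr8"))
+
+rospy.init_node("opendr_bridge_example")
+rospy.Subscriber("/usb_cam/image_raw", ROS_Image, callback, queue_size=1)
+rospy.spin()
+```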
+
+#### `ROSBridge.from_ros_pose`
+
+```python
+ROSBridge.from_ros_pose(self,
+ ros_pose)
+```
+
+Converts an OpenDRPose2D message into an OpenDR Pose.
+
+Parameters:
+
+- **ros_pose**: *opendr_bridge.msg.OpenDRPose2D*\
+ ROS pose to be converted into an OpenDR Pose.
+
+#### `ROSBridge.to_ros_pose`
+
+```python
+ROSBridge.to_ros_pose(self,
+ pose)
+```
+Converts an OpenDR Pose into a OpenDRPose2D msg that can carry the same information, i.e. a list of keypoints,
+the pose detection confidence and the pose id.
+Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel position on input image with (0, 0)
+being the top-left corner.
+
+Parameters:
+
+- **pose**: *engine.target.Pose*\
+ OpenDR Pose to be converted to ROS OpenDRPose2D.
+
+
+#### `ROSBridge.to_ros_category`
+
+```python
+ROSBridge.to_ros_category(self,
+ category)
+```
+Converts an OpenDR Category used for category recognition into a ROS ObjectHypothesis.
+
+Parameters:
+
+- **category**: *engine.target.Category*\
+ OpenDR Category used for category recognition to be converted to ROS ObjectHypothesis.
+
+#### `ROSBridge.to_ros_category_description`
+
+```python
+ROSBridge.to_ros_category_description(self,
+ category)
+```
+Converts an OpenDR Category into a ROS String.
+
+Parameters:
+
+- **category**: *engine.target.Category*\
+ OpenDR Category to be converted to ROS String.
+
+
+#### `ROSBridge.from_ros_category`
+
+```python
+ROSBridge.from_ros_category(self,
+ ros_hypothesis)
+```
+
+Converts a ROS ObjectHypothesis message into an OpenDR Category.
+
+Parameters:
+
+- **ros_hypothesis**: *vision_msgs.msg.ObjectHypothesis*\
+ ROS ObjectHypothesis to be converted into an OpenDR Category.
+
+
+#### `ROSBridge.from_ros_face`
+
+```python
+ROSBridge.from_ros_face(self,
+ ros_hypothesis)
+```
+
+Converts a ROS ObjectHypothesis message into an OpenDR Category.
+
+Parameters:
+
+- **ros_hypothesis**: *vision_msgs.msg.ObjectHypothesis*\
+ ROS ObjectHypothesis to be converted into an OpenDR Category.
+
+#### `ROSBridge.to_ros_face`
+
+```python
+ROSBridge.to_ros_face(self,
+ category)
+```
+Converts an OpenDR Category used for face recognition into a ROS ObjectHypothesis.
+
+Parameters:
+
+- **category**: *engine.target.Category*\
+ OpenDR Category used for face recognition to be converted to ROS ObjectHypothesis.
+
+#### `ROSBridge.to_ros_face_id`
+
+```python
+ROSBridge.to_ros_face_id(self,
+ category)
+```
+Converts an OpenDR Category into a ROS String.
+
+Parameters:
+
+- **category**: *engine.target.Category*\
+ OpenDR Category to be converted to ROS String.
+
+#### `ROSBridge.to_ros_boxes`
+
+```python
+ROSBridge.to_ros_boxes(self,
+ box_list)
+```
+Converts an OpenDR BoundingBoxList into a Detection2DArray msg that can carry the same information. Each bounding box is
+represented by its center coordinates as well as its width/height dimensions.
+
+#### `ROSBridge.from_ros_boxes`
+
+```python
+ROSBridge.from_ros_boxes(self,
+ ros_detections)
+```
+Converts a ROS Detection2DArray message with bounding boxes into an OpenDR BoundingBoxList.
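+
+A brief usage sketch of the two box conversions (the bounding box values are placeholders):
+```python
+from opendr_bridge import ROSBridge
+from opendr.engine.target import BoundingBox, BoundingBoxList
+
+bridge = ROSBridge()
+boxes = BoundingBoxList([BoundingBox(name=0, left=10, top=20, width=100, height=50, score=0.9)])
+
+ros_boxes = bridge.to_ros_boxes(boxes)         # vision_msgs.msg.Detection2DArray
+boxes_back = bridge.from_ros_boxes(ros_boxes)  # engine.target.BoundingBoxList
+```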
+
+#### `ROSBridge.from_ros_3Dpose`
+
+```python
+ROSBridge.from_ros_3Dpose(self,
+ ros_pose)
+```
+
+Converts a ROS pose into an OpenDR pose (used for a 3D pose).
+
+Parameters:
+
+- **ros_pose**: *geometry_msgs.msg.Pose*\
+ ROS pose to be converted into an OpenDR pose.
+
+#### `ROSBridge.to_ros_3Dpose`
+
+```python
+ROSBridge.to_ros_3Dpose(self,
+ opendr_pose)
+```
+Converts an OpenDR pose into a ROS ```geometry_msgs.msg.Pose``` message.
+
+Parameters:
+
+- **opendr_pose**: *engine.target.Pose*\
+ OpenDR pose to be converted to ```geometry_msgs.msg.Pose``` message.
+
+#### `ROSBridge.to_ros_mesh`
+
+```python
+ROSBridge.to_ros_mesh(self,
+ vertices, faces)
+```
+Converts a triangle mesh consisting of vertices, faces into a ROS ```shape_msgs.msg.Mesh``` message.
+
+Parameters:
+
+- **vertices**: *numpy.ndarray*\
+ Vertices (Nx3) of a triangle mesh.
+- **faces**: *numpy.ndarray*\
+ Faces (Nx3) of a triangle mesh.
+
+#### `ROSBridge.to_ros_colors`
+
+```python
+ROSBridge.to_ros_colors(self,
+ colors)
+```
+Converts a list of colors into a list of ROS ```std_msgs.msg.ColorRGBA``` messages.
+
+Parameters:
+
+- **colors**: *list of list of size 3*\
+ List of colors to be converted to a list of ROS colors.
+
+#### `ROSBridge.from_ros_mesh`
+
+```python
+ROSBridge.from_ros_mesh(self,
+ ros_mesh)
+```
+Converts a ROS mesh into arrays of vertices and faces of a triangle mesh.
+
+Parameters:
+
+- **ros_mesh**: *shape_msgs.msg.Mesh*\
+  ROS mesh to be converted into vertices and faces of a triangle mesh.
+
+#### `ROSBridge.from_ros_colors`
+
+```python
+ROSBridge.from_ros_colors(self,
+ ros_colors)
+```
+Converts a list of ROS colors into an array (Nx3).
+
+Parameters:
+
+- **ros_colors**: list of *std_msgs.msg.ColorRGBA*\
+  List of ROS colors to be converted into an array.
+
+
+#### `ROSBridge.from_ros_image_to_depth`
+
+```python
+ROSBridge.from_ros_image_to_depth(self,
+ message,
+ encoding)
+```
+
+This method converts a ROS image message into an OpenDR grayscale depth image.
+
+Parameters:
+
+- **message**: *sensor_msgs.msg.Image*\
+ ROS image to be converted into an OpenDR image.
+- **encoding**: *str, default='mono16'*\
+ Encoding to be used for the conversion.
+
+#### `ROSBridge.from_category_to_rosclass`
+
+```python
+ROSBridge.from_category_to_rosclass(self,
+ prediction,
+ source_data)
+```
+This method converts an OpenDR Category object into a Classification2D message with class label, confidence, timestamp and, optionally, the corresponding input.
+
+Parameters:
+
+- **prediction**: *engine.target.Category*\
+ OpenDR Category object
+- **source_data**: *default=None*\
+ Corresponding input, default=None
+
+#### `ROSBridge.from_rosarray_to_timeseries`
+
+```python
+ROSBridge.from_rosarray_to_timeseries(self,
+ ros_array,
+ dim1,
+ dim2)
+```
+This method converts a ROS array into an OpenDR Timeseries object.
+
+Parameters:
+
+- **ros_array**: *std_msgs.msg.Float32MultiArray*\
+ ROS array of data
+- **dim1**: *int*\
+ First dimension
+- **dim2**: *int*\
+ Second dimension
+
+#### `ROSBridge.from_ros_point_cloud`
+
+```python
+ROSBridge.from_ros_point_cloud(self, point_cloud)
+```
+
+Converts a ROS PointCloud message into an OpenDR PointCloud.
+
+Parameters:
+
+- **point_cloud**: *sensor_msgs.msg.PointCloud*\
+ ROS PointCloud to be converted.
+
+#### `ROSBridge.to_ros_point_cloud`
+
+```python
+ROSBridge.to_ros_point_cloud(self, point_cloud)
+```
+Converts an OpenDR PointCloud message into a ROS PointCloud.
+
+Parameters:
+
+- **point_cloud**: *engine.data.PointCloud*\
+ OpenDR PointCloud to be converted.
+
+#### `ROSBridge.from_ros_boxes_3d`
+
+```python
+ROSBridge.from_ros_boxes_3d(self, ros_boxes_3d, classes)
+```
+
+Converts a ROS Detection3DArray message into an OpenDR BoundingBox3DList object.
+
+Parameters:
+
+- **ros_boxes_3d**: *vision_msgs.msg.Detection3DArray*\
+ The ROS boxes to be converted.
+
+- **classes**: *[str]*\
+ The array of classes to transform an index into a string name.
+
+#### `ROSBridge.to_ros_boxes_3d`
+
+```python
+ROSBridge.to_ros_boxes_3d(self, boxes_3d, classes)
+```
+Converts an OpenDR BoundingBox3DList object into a ROS Detection3DArray message.
+
+Parameters:
+
+- **boxes_3d**: *engine.target.BoundingBox3DList*\
+  The OpenDR boxes to be converted.
+
+- **classes**: *[str]*
+ The array of classes to transform from string name into an index.
+
+#### `ROSBridge.from_ros_tracking_annotation`
+
+```python
+ROSBridge.from_ros_tracking_annotation(self, ros_detections, ros_tracking_ids, frame)
+```
+
+Converts a pair of ROS messages with bounding boxes and tracking ids into an OpenDR TrackingAnnotationList.
+
+Parameters:
+
+- **ros_detections**: *vision_msgs.msg.Detection2DArray*\
+ The boxes to be converted.
+- **ros_tracking_ids**: *std_msgs.msg.Int32MultiArray*\
+ The tracking ids corresponding to the boxes.
+- **frame**: *int, default=-1*\
+ The frame index to assign to the tracking boxes.
+
+#### `ROSBridge.to_ros_single_tracking_annotation`
+
+```python
+ROSBridge.to_ros_single_tracking_annotation(self, tracking_annotation)
+```
+
+Converts a `TrackingAnnotation` object to a `Detection2D` ROS message.
+This method is intended for single object tracking methods.
+
+Parameters:
+
+- **tracking_annotation**: *opendr.engine.target.TrackingAnnotation*\
+ The box to be converted.
+
+#### `ROSBridge.from_ros_single_tracking_annotation`
+
+```python
+ROSBridge.from_ros_single_tracking_annotation(self, ros_detection_box)
+```
+
+Converts a `Detection2D` ROS message object to a `TrackingAnnotation` object.
+This method is intended for single object tracking methods.
+
+Parameters:
+
+- **ros_detection_box**: *vision_msgs.Detection2D*\
+ The box to be converted.
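+
+A brief usage sketch of the two single-object-tracking conversions (the box values are placeholders, taken from the SiamRPN example):
+```python
+from opendr_bridge import ROSBridge
+from opendr.engine.target import TrackingAnnotation
+
+bridge = ROSBridge()
+box = TrackingAnnotation(left=598, top=312, width=75, height=200, name=0, id=0)
+
+ros_box = bridge.to_ros_single_tracking_annotation(box)         # vision_msgs.msg.Detection2D
+box_back = bridge.from_ros_single_tracking_annotation(ros_box)  # engine.target.TrackingAnnotation
+```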
+
+## ROS message equivalence with OpenDR
+1. `sensor_msgs.msg.Image` is used as an equivalent to `engine.data.Image`
+2. `opendr_bridge.msg.Pose` is used as an equivalent to `engine.target.Pose`
+3. `vision_msgs.msg.Detection2DArray` is used as an equivalent to `engine.target.BoundingBoxList`
+4. `vision_msgs.msg.Detection2D` is used as an equivalent to `engine.target.BoundingBox` and
+ to `engine.target.TrackingAnnotation` in single object tracking
+5. `geometry_msgs.msg.Pose` is used as an equivalent to `engine.target.Pose` for 3D poses conversion only.
+6. `vision_msgs.msg.Detection3DArray` is used as an equivalent to `engine.target.BoundingBox3DList`.
+7. `sensor_msgs.msg.PointCloud` is used as an equivalent to `engine.data.PointCloud`.
+
+## ROS services
+The following ROS services are implemented (`srv` folder):
+1. `opendr_bridge.OpenDRSingleObjectTracking`: can be used to initialize the tracking process of single
+ object trackers, by providing a `Detection2D` bounding box
\ No newline at end of file
diff --git a/docs/reference/rgbd-hand-gesture-learner.md b/docs/reference/rgbd-hand-gesture-learner.md
index d967b47391..93bdc40c0b 100644
--- a/docs/reference/rgbd-hand-gesture-learner.md
+++ b/docs/reference/rgbd-hand-gesture-learner.md
@@ -2,6 +2,20 @@
The *rgbd_hand_gesture_learner* module contains the *RgbdHandGestureLearner* class, which inherits from the abstract class *Learner*.
+On the table below you can find the gesture classes and their corresponding IDs:
+
+| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
+|:------:|:------:|:-----:|:----:|:----:|:--------------:|:--------------:|:----:|:---:|:-----:|:-----:|:---:|:----:|:-----:|:-------:|:---:|:-----:|
+| Class | COLLAB | Eight | Five | Four | Horiz HBL, HFR | Horiz HFL, HBR | Nine | One | Punch | Seven | Six | Span | Three | TimeOut | Two | XSign |
+
+The naming convention of the gesture classes is as follows:
+- V is used for vertical gestures, while H is used for horizontal gestures.
+- F identifies the version of the gesture where the front of the hand is facing the camera, while B identifies the version where the back of the hand is facing the camera.
+- R is used for right-hand gestures, while L is used for left-hand gestures.
+
+Below is an illustration of the hand gestures; the image is taken from [[1]](#dataset).
+![Hand gesture examples](images/hand_gesture_examples.png)
+
### Class RgbdHandGestureLearner
Bases: `opendr.engine.learners.Learner`
diff --git a/docs/reference/rosbridge.md b/docs/reference/ros2bridge.md
similarity index 94%
rename from docs/reference/rosbridge.md
rename to docs/reference/ros2bridge.md
index 6e19acbc51..d0c155e4d7 100755
--- a/docs/reference/rosbridge.md
+++ b/docs/reference/ros2bridge.md
@@ -59,25 +59,28 @@ ROSBridge.from_ros_pose(self,
ros_pose)
```
-Converts a ROS pose into an OpenDR pose.
+Converts an OpenDRPose2D message into an OpenDR Pose.
Parameters:
-- **message**: *ros_bridge.msg.Pose*\
- ROS pose to be converted into an OpenDR pose.
+- **ros_pose**: *ros_bridge.msg.OpenDRPose2D*\
+ ROS pose to be converted into an OpenDR Pose.
#### `ROSBridge.to_ros_pose`
```python
ROSBridge.to_ros_pose(self,
- ros_pose)
+ pose)
```
-Converts an OpenDR pose into a ROS pose.
+Converts an OpenDR Pose into a OpenDRPose2D msg that can carry the same information, i.e. a list of keypoints,
+the pose detection confidence and the pose id.
+Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel position on input image with (0, 0)
+being the top-left corner.
Parameters:
-- **message**: *engine.target.Pose*\
- OpenDR pose to be converted to ROS pose.
+- **pose**: *engine.target.Pose*\
+ OpenDR Pose to be converted to ROS OpenDRPose2D.
#### `ROSBridge.to_ros_category`
diff --git a/docs/reference/semantic-segmentation.md b/docs/reference/semantic-segmentation.md
index 783b801810..9a0b0f2969 100644
--- a/docs/reference/semantic-segmentation.md
+++ b/docs/reference/semantic-segmentation.md
@@ -2,6 +2,11 @@
The *semantic segmentation* module contains the *BisenetLearner* class, which inherit from the abstract class *Learner*.
+On the table below you can find the detectable classes and their corresponding IDs:
+
+| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown |
+|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------|
+| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
### Class BisenetLearner
Bases: `engine.learners.Learner`
diff --git a/docs/reference/single-demonstration-grasping.md b/docs/reference/single-demonstration-grasping.md
index 7332a0adb0..a4d8f67dad 100644
--- a/docs/reference/single-demonstration-grasping.md
+++ b/docs/reference/single-demonstration-grasping.md
@@ -113,7 +113,7 @@ $ make install_runtime_dependencies
after installing dependencies, the user must source the workspace in the shell in order to detect the packages:
```
-$ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+$ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
```
## Demos
@@ -125,7 +125,7 @@ Three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ export WEBOTS_HOME=/usr/local/webots
5. $ roslaunch single_demo_grasping_demo panda_sim.launch
```
@@ -134,7 +134,7 @@ Three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ roslaunch single_demo_grasping_demo camera_stream_inference.launch
```
@@ -142,7 +142,7 @@ Three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ roslaunch single_demo_grasping_demo panda_sim_control.launch
```
@@ -150,14 +150,14 @@ Three different nodes must be launched consecutively in order to properly run th
You can find an example on how to use the learner class to run inference and see the result in the following directory:
```
-$ cd projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
+$ cd projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
```
simply run:
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
-4. $ cd projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
+4. $ cd projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
5. $ ./single_demo_inference.py
```
diff --git a/docs/reference/skeleton-based-action-recognition.md b/docs/reference/skeleton-based-action-recognition.md
index 241ff902d1..21eb21e5e5 100644
--- a/docs/reference/skeleton-based-action-recognition.md
+++ b/docs/reference/skeleton-based-action-recognition.md
@@ -2,29 +2,30 @@
The *skeleton_based_action_recognition* module contains the *SpatioTemporalGCNLearner* and *ProgressiveSpatioTemporalGCNLearner* classes, which inherits from the abstract class *Learner*.
-#### Data preparation
- Download the NTU-RGB+D skeleton data from [here](https://github.com/shahroudy/NTURGB-D) and the kinetics-skeleton dataset from [here](https://drive.google.com/drive/folders/1SPQ6FmFsjGg3f59uCWfdUWI-5HJM_YhZ).
- Then run the following function to preprocess the NTU-RGB+D and Kinetics skeleton data for ST-GCN methods:
-
- ```python
- from opendr.perception.skeleton_based_action_recognition.algorithm.datasets import ntu_gendata
- from opendr.perception.skeleton_based_action_recognition.algorithm.datasets import kinetics_gendata
- python3 ntu_gendata.py --data_path ./data/nturgbd_raw_skeletons --ignored_sample_path ./algorithm/datasets/ntu_samples_with_missing_skeletons.txt --out_folder ./data/preprocessed_nturgbd
- python3 kinetics_gendata.py --data_path ./data/kinetics_raw_skeletons --out_folder ./data/preprocessed_kinetics_skeletons
- ```
- You need to specify the path of the downloaded data as `--data_path` and the path of the processed data as `--out_folder`.
- ntu_samples_with_missing_skeletons.txt provides the NTU-RGB+D sample indices which don't contain any skeleton.
- You need to specify the path of this file with --ignored_sample_path.
+#### Data preparation
+Download the NTU-RGB+D skeleton data from [here](https://github.com/shahroudy/NTURGB-D) and the kinetics-skeleton dataset from [here](https://drive.google.com/drive/folders/1SPQ6FmFsjGg3f59uCWfdUWI-5HJM_YhZ).
+Then run the following function to preprocess the NTU-RGB+D and Kinetics skeleton data for ST-GCN methods:
+
+```bash
+cd src/opendr/perception/skeleton_based_action_recognition/algorithm/datasets
+
+python3 ntu_gendata.py --data_path ./data/nturgbd_raw_skeletons --ignored_sample_path ./algorithm/datasets/ntu_samples_with_missing_skeletons.txt --out_folder ./data/preprocessed_nturgbd
+
+python3 kinetics_gendata.py --data_path ./data/kinetics_raw_skeletons --out_folder ./data/preprocessed_kinetics_skeletons
+```
+You need to specify the path of the downloaded data as `--data_path` and the path of the processed data as `--out_folder`.
+ntu_samples_with_missing_skeletons.txt provides the NTU-RGB+D sample indices which don't contain any skeleton.
+You need to specify the path of this file with --ignored_sample_path.
### Class SpatioTemporalGCNLearner
Bases: `engine.learners.Learner`
-The *SpatioTemporalGCNLearner* class is a wrapper of the ST-GCN [[1]](#1) and the proposed methods TA-GCN [[2]](#2) and ST-BLN [[3]](#3) for Skeleton-based Human
+The *SpatioTemporalGCNLearner* class is a wrapper of the ST-GCN [[1]](#1) and the proposed methods TA-GCN [[2]](#2) and ST-BLN [[3]](#3) for Skeleton-based Human
Action Recognition.
This implementation of ST-GCN can be found in [OpenMMLAB toolbox](
https://github.com/open-mmlab/mmskeleton/tree/b4c076baa9e02e69b5876c49fa7c509866d902c7).
-It can be used to perform the baseline method ST-GCN and the proposed methods TA-GCN [[2]](#2) and ST-BLN [[3]](#3) for skeleton-based action recognition.
-The TA-GCN and ST-BLN methods are proposed on top of ST-GCN and make it more efficient in terms of number of model parameters and floating point operations.
+It can be used to perform the baseline method ST-GCN and the proposed methods TA-GCN [[2]](#2) and ST-BLN [[3]](#3) for skeleton-based action recognition.
+The TA-GCN and ST-BLN methods are proposed on top of ST-GCN and make it more efficient in terms of number of model parameters and floating point operations.
The [SpatioTemporalGCNLearner](/src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py) class has the
following public methods:
@@ -35,62 +36,64 @@ SpatioTemporalGCNLearner(self, lr, batch_size, optimizer_name, lr_schedule,
checkpoint_after_iter, checkpoint_load_iter, temp_path,
device, num_workers, epochs, experiment_name,
device_ind, val_batch_size, drop_after_epoch,
- start_epoch, dataset_name, num_class, num_point,
- num_person, in_channels, method_name,
+ start_epoch, dataset_name, num_class, num_point,
+ num_person, in_channels, method_name,
stbln_symmetric, num_frames, num_subframes)
```
Constructor parameters:
-- **lr**: *float, default=0.1*
+- **lr**: *float, default=0.1*\
Specifies the initial learning rate to be used during training.
-- **batch_size**: *int, default=128*
+- **batch_size**: *int, default=128*\
Specifies number of skeleton sequences to be bundled up in a batch during training. This heavily affects memory usage, adjust according to your system.
-- **optimizer_name**: *str {'sgd', 'adam'}, default='sgd'*
+- **optimizer_name**: *str {'sgd', 'adam'}, default='sgd'*\
Specifies the optimizer type that should be used.
-- **lr_schedule**: *str, default=' '*
+- **lr_schedule**: *str, default=' '*\
Specifies the learning rate scheduler.
-- **checkpoint_after_iter**: *int, default=0*
+- **checkpoint_after_iter**: *int, default=0*\
Specifies per how many training iterations a checkpoint should be saved. If it is set to 0 no checkpoints will be saved.
-- **checkpoint_load_iter**: *int, default=0*
+- **checkpoint_load_iter**: *int, default=0*\
Specifies which checkpoint should be loaded. If it is set to 0, no checkpoints will be loaded.
-- **temp_path**: *str, default=''*
+- **temp_path**: *str, default=''*\
Specifies a path where the algorithm saves the checkpoints and onnx optimized model (if needed).
-- **device**: *{'cpu', 'cuda'}, default='cuda'*
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
Specifies the device to be used.
-- **num_workers**: *int, default=32*
+- **num_workers**: *int, default=32*\
Specifies the number of workers to be used by the data loader.
-- **epochs**: *int, default=50*
+- **epochs**: *int, default=50*\
Specifies the number of epochs the training should run for.
-- **experiment_name**: *str, default='stgcn_nturgbd'*
+- **experiment_name**: *str, default='stgcn_nturgbd'*\
String name to attach to checkpoints.
-- **device_ind**: *list, default=[0]*
- List of GPU indices to be used if the device is 'cuda'.
-- **val_batch_size**: *int, default=256*
+- **device_ind**: *list, default=[0]*\
+ List of GPU indices to be used if the device is 'cuda'.
+- **val_batch_size**: *int, default=256*\
Specifies number of skeleton sequences to be bundled up in a batch during evaluation. This heavily affects memory usage, adjust according to your system.
-- **drop_after_epoch**: *list, default=[30,40]*
- List of epoch numbers in which the optimizer drops the learning rate.
-- **start_epoch**: *int, default=0*
- Specifies the starting epoch number for training.
-- **dataset_name**: *str {'kinetics', 'nturgbd_cv', 'nturgbd_cs'}, default='nturgbd_cv'*
- Specifies the name of dataset that is used for training and evaluation.
-- **num_class**: *int, default=60*
- Specifies the number of classes for the action dataset.
-- **num_point**: *int, default=25*
- Specifies the number of body joints in each skeleton.
-- **num_person**: *int, default=2*
+- **drop_after_epoch**: *list, default=[30,40]*\
+ List of epoch numbers in which the optimizer drops the learning rate.
+- **start_epoch**: *int, default=0*\
+ Specifies the starting epoch number for training.
+- **dataset_name**: *str {'kinetics', 'nturgbd_cv', 'nturgbd_cs'}, default='nturgbd_cv'*\
+ Specifies the name of dataset that is used for training and evaluation.
+- **num_class**: *int, default=60*\
+ Specifies the number of classes for the action dataset.
+- **num_point**: *int, default=25*\
+ Specifies the number of body joints in each skeleton.
+- **num_person**: *int, default=2*\
Specifies the number of body skeletons in each frame.
-- **in_channels**: *int, default=3*
- Specifies the number of input channels for each body joint.
-- **graph_type**: *str {'kinetics', 'ntu'}, default='ntu'*
- Specifies the type of graph structure associated with the dataset.
-- **method_name**: *str {'stgcn', 'stbln', 'tagcn'}, default='stgcn'*
- Specifies the name of method to be trained and evaluated. For each method, a different model is trained.
-- **stbln_symmetric**: *bool, default=False*
- Specifies if the random graph in stbln method is symmetric or not. This parameter is used if method_name is 'stbln'.
-- **num_frames**: *int, default=300*
- Specifies the number of frames in each skeleton sequence. This parameter is used if the method_name is 'tagcn'.
-- **num_subframes**: *int, default=100*
- Specifies the number of sub-frames that are going to be selected by the tagcn model. This parameter is used if the method_name is 'tagcn'.
+- **in_channels**: *int, default=3*\
+ Specifies the number of input channels for each body joint.
+- **graph_type**: *str {'kinetics', 'ntu'}, default='ntu'*\
+ Specifies the type of graph structure associated with the dataset.
+- **method_name**: *str {'stgcn', 'stbln', 'tagcn'}, default='stgcn'*\
+  Specifies the name of the method to be trained and evaluated.
+  For each method, a different model is trained.
+- **stbln_symmetric**: *bool, default=False*\
+  Specifies whether the random graph in the stbln method is symmetric.
+  This parameter is used if method_name is 'stbln'.
+- **num_frames**: *int, default=300*\
+ Specifies the number of frames in each skeleton sequence. This parameter is used if the method_name is 'tagcn'.
+- **num_subframes**: *int, default=100*\
+ Specifies the number of sub-frames that are going to be selected by the tagcn model. This parameter is used if the method_name is 'tagcn'.
#### `SpatioTemporalGCNLearner.fit`
@@ -101,41 +104,43 @@ SpatioTemporalGCNLearner.fit(self, dataset, val_dataset, logging_path, silent, v
val_labels_filename, skeleton_data_type)
```
This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
Parameters:
-- **dataset**: *object*
+
+- **dataset**: *object*\
Object that holds the training dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **val_dataset**: *object*
- Object that holds the validation dataset.
-- **logging_path**: *str, default=''*
+- **val_dataset**: *object*\
+ Object that holds the validation dataset.
+- **logging_path**: *str, default=''*\
Path to save TensorBoard log files and the training log files.
- If set to None or '', TensorBoard logging is disabled and no log file is created.
-- **silent**: *bool, default=False*
+ If set to None or '', TensorBoard logging is disabled and no log file is created.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of training progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True***
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
-- **momentum**: *float, default=0.9*
- Specifies the momentum value for optimizer.
-- **nesterov**: *bool, default=True***
- If set to true, the optimizer uses Nesterov's momentum.
-- **weight_decay**: *float, default=0.0001***
- Specifies the weight_decay value of the optimizer.
-- **train_data_filename**: *str, default='train_joints.npy'*
- Filename that contains the training data.
+- **momentum**: *float, default=0.9*\
+  Specifies the momentum value for the optimizer.
+- **nesterov**: *bool, default=True*\
+  If set to True, the optimizer uses Nesterov's momentum.
+- **weight_decay**: *float, default=0.0001*\
+ Specifies the weight_decay value of the optimizer.
+- **train_data_filename**: *str, default='train_joints.npy'*\
+ Filename that contains the training data.
This file should be contained in the dataset path provided.
Note that this is a file name, not a path.
-- **train_labels_filename**: *str, default='train_labels.pkl'*
- Filename of the labels .pkl file.
+- **train_labels_filename**: *str, default='train_labels.pkl'*\
+ Filename of the labels .pkl file.
This file should be contained in the dataset path provided.
-- **val_data_filename**: *str, default='val_joints.npy'*
+- **val_data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **val_labels_filename**: *str, default='val_labels.pkl'*
+- **val_labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
#### `SpatioTemporalGCNLearner.eval`
```python
@@ -145,55 +150,58 @@ SpatioTemporalGCNLearner.eval(self, val_dataset, val_loader, epoch, silent, verb
```
This method is used to evaluate a trained model on an evaluation dataset.
-Returns a dictionary containing stats regarding evaluation.
+Returns a dictionary containing stats regarding evaluation.
+
Parameters:
-- **val_dataset**: *object*
+
+- **val_dataset**: *object*\
Object that holds the evaluation dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **val_loader**: *object, default=None*
+- **val_loader**: *object, default=None*\
Object that holds a Python iterable over the evaluation dataset.
Object of `torch.utils.data.DataLoader` class.
-- **epoch**: *int, default=0*
- The training epoch in which the model is evaluated.
-- **silent**: *bool, default=False*
+- **epoch**: *int, default=0*\
+ The training epoch in which the model is evaluated.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of evaluation progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True*
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
-- **val_data_filename**: *str, default='val_joints.npy'*
+- **val_data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **val_labels_filename**: *str, default='val_labels.pkl'*
+- **val_labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
-- **save_score**: *bool, default=False*
- If set to True, it saves the classification score of all samples in differenc classes
- in a log file. Default to False.
-- **wrong_file**: *str, default=None*
- If set to True, it saves the results of wrongly classified samples. Default to False.
-- **result_file**: *str, default=None*
- If set to True, it saves the classification results of all samples. Default to False.
-- **show_topk**: *list, default=[1, 5]*
- Is set to a list of integer numbers defining the k in top-k accuracy. Default is set to [1,5].
-
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
+- **save_score**: *bool, default=False*\
+ If set to True, it saves the classification score of all samples in different classes
+ in a log file.
+- **wrong_file**: *str, default=None*\
+  If a file path is provided, the results of wrongly classified samples are saved to this file.
+- **result_file**: *str, default=None*\
+  If a file path is provided, the classification results of all samples are saved to this file.
+- **show_topk**: *list, default=[1, 5]*\
+  A list of integers defining the k values for top-k accuracy.
+
#### `SpatioTemporalGCNLearner.init_model`
```python
SpatioTemporalGCNLearner.init_model(self)
```
-This method is used to initialize the imported model and its loss function.
-
+This method is used to initialize the imported model and its loss function.
+
#### `SpatioTemporalGCNLearner.infer`
```python
SpatioTemporalGCNLearner.infer(self, SkeletonSeq_batch)
```
-This method is used to perform action recognition on a sequence of skeletons.
-It returns the action category as an object of `engine.target.Category` if a proper input object `engine.data.SkeletonSequence` is given.
+This method is used to perform action recognition on a sequence of skeletons.
+It returns the action category as an object of `engine.target.Category` if a proper input object `engine.data.SkeletonSequence` is given.
Parameters:
-- **SkeletonSeq_batch**: *object***
+
+- **SkeletonSeq_batch**: *object*\
Object of type engine.data.SkeletonSequence.
#### `SpatioTemporalGCNLearner.save`
@@ -201,20 +209,18 @@ Parameters:
SpatioTemporalGCNLearner.save(self, path, model_name, verbose)
```
This method is used to save a trained model.
-Provided with the path "/my/path" (absolute or relative), it creates the "path" directory, if it does not already
-exist. Inside this folder, the model is saved as "model_name.pt" and the metadata file as "model_name.json". If the directory
-already exists, the "model_name.pt" and "model_name.json" files are overwritten.
+Provided with the path "/my/path" (absolute or relative), it creates the "path" directory, if it does not already exist.
+Inside this folder, the model is saved as "model_name.pt" and the metadata file as "model_name.json". If the directory already exists, the "model_name.pt" and "model_name.json" files are overwritten.
-If [`self.optimize`](/src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py#L539) was run previously, it saves the optimized ONNX model in
-a similar fashion with an ".onnx" extension, by copying it from the self.temp_path it was saved previously
-during conversion.
+If [`self.optimize`](/src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py#L539) was run previously, it saves the optimized ONNX model in a similar fashion with an ".onnx" extension, by copying it from the self.temp_path where it was saved during conversion.
Parameters:
-- **path**: *str*
+
+- **path**: *str*\
Path to save the model.
-- **model_name**: *str*
- The file name to be saved.
-- **verbose**: *bool, default=False*
+- **model_name**: *str*\
+ The file name to be saved.
+- **verbose**: *bool, default=False*\
If set to True, prints a message on success.
#### `SpatioTemporalGCNLearner.load`
@@ -226,11 +232,12 @@ This method is used to load a previously saved model from its saved folder.
Loads the model from inside the directory of the path provided, using the metadata .json file included.
Parameters:
-- **path**: *str*
+
+- **path**: *str*\
Path of the model to be loaded.
-- **model_name**: *str*
- The file name to be loaded.
-- **verbose**: *bool, default=False*
+- **model_name**: *str*\
+ The file name to be loaded.
+- **verbose**: *bool, default=False*\
If set to True, prints a message on success.
@@ -242,7 +249,8 @@ SpatioTemporalGCNLearner.optimize(self, do_constant_folding)
This method is used to optimize a trained model to ONNX format, which can then be used for inference.
Parameters:
-- **do_constant_folding**: *bool, default=False*
+
+- **do_constant_folding**: *bool, default=False*\
ONNX format optimization.
If True, the constant-folding optimization is applied to the model during export.
Constant-folding optimization will replace some of the operations that have all constant inputs with pre-computed constant nodes.
@@ -255,27 +263,29 @@ SpatioTemporalGCNLearner.multi_stream_eval(self, dataset, scores, data_filename,
labels_filename, skeleton_data_type,
verbose, silent)
```
-This method is used to ensemble the classification results of the model on two or more data streams like joints, bones and motions.
-It returns the top-k classification performance of ensembled model.
+This method is used to ensemble the classification results of the model on two or more data streams like joints, bones and motions.
+It returns the top-k classification performance of the ensembled model.
Parameters:
-- **dataset**: *object*
+
+- **dataset**: *object*\
Object that holds the dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **score**: *list*
- A list of score arrays. Each array in the list contains the evaluation results for a data stream.
-- **data_filename**: *str, default='val_joints.npy'*
+- **scores**: *list*\
+ A list of score arrays.
+ Each array in the list contains the evaluation results for a data stream.
+- **data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **labels_filename**: *str, default='val_labels.pkl'*
+- **labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
-- **silent**: *bool, default=False*
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of evaluation progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True*
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
@@ -285,37 +295,36 @@ Parameters:
SpatioTemporalGCNLearner.download(self, path, mode, verbose, url, file_name)
```
-Download utility for various skeleton-based action recognition components. Downloads files depending on mode and
-saves them in the path provided. It supports downloading:
-1. the pretrained weights for stgcn, tagcn and stbln models.
-2. a dataset containing one or more skeleton sequences and its labels.
+Download utility for various skeleton-based action recognition components. Downloads files depending on mode and saves them in the path provided. It supports downloading:
+1. the pretrained weights for stgcn, tagcn and stbln models.
+2. a dataset containing one or more skeleton sequences and its labels.
Parameters:
-- **path**: *str, default=None*
+
+- **path**: *str, default=None*\
Local path to save the files, defaults to self.parent_dir if None.
-- **mode**: *str, default="pretrained"*
+- **mode**: *str, default="pretrained"*\
What file to download; can be one of "pretrained", "train_data", "val_data", "test_data".
-- **verbose**: *bool, default=False*
+- **verbose**: *bool, default=False*\
Whether to print messages in the console.
-- **url**: *str, default=OpenDR FTP URL*
+- **url**: *str, default=OpenDR FTP URL*\
URL of the FTP server.
-- **file_name**: *str*
- The name of the file containing the pretrained model.
-
+- **file_name**: *str*\
+ The name of the file containing the pretrained model.
#### Examples
-* **Training example using an `ExternalDataset`**.
+* **Training example using an `ExternalDataset`**.
The training and evaluation dataset should be present in the path provided, along with the labels file.
The `batch_size` argument should be adjusted according to available memory.
```python
from opendr.perception.skeleton_based_action_recognition.spatio_temporal_gcn_learner import SpatioTemporalGCNLearner
from opendr.engine.datasets import ExternalDataset
-
+
training_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
validation_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
-
+
stgcn_learner = SpatioTemporalGCNLearner(temp_path='./parent_dir',
batch_size=64, epochs=50,
checkpoint_after_iter=10, val_batch_size=128,
@@ -330,9 +339,9 @@ Parameters:
skeleton_data_type='joint')
stgcn_learner.save(path='./saved_models/stgcn_nturgbd_cv_checkpoints', model_name='test_stgcn')
```
- In a similar manner train the TA-GCN model by specifying the number of important frames that the model selects as num_subframes.
- The number of frames in both NTU-RGB+D and Kinetics-skeleton is 300.
-
+  In a similar manner, the TA-GCN model can be trained by specifying the number of important frames that the model selects as num_subframes.
+ The number of frames in both NTU-RGB+D and Kinetics-skeleton is 300.
+
```python
tagcn_learner = SpatioTemporalGCNLearner(temp_path='./parent_dir',
batch_size=64, epochs=50,
@@ -348,9 +357,9 @@ Parameters:
skeleton_data_type='joint')
tagcn_learner.save(path='./saved_models/tagcn_nturgbd_cv_checkpoints', model_name='test_tagcn')
```
-
- For training the ST-BLN model, set the method_name to 'stbln' and specify if the model uses a symmetric attention matrix or not by setting stbln_symmetric to True or False.
-
+
+  For training the ST-BLN model, set the method_name to 'stbln' and specify whether the model uses a symmetric attention matrix by setting stbln_symmetric to True or False.
+
```python
stbln_learner = SpatioTemporalGCNLearner(temp_path='./parent_dir',
@@ -367,7 +376,7 @@ Parameters:
skeleton_data_type='joint')
stbln_learner.save(path='./saved_models/stbln_nturgbd_cv_checkpoints', model_name='test_stbln')
```
-
+
* **Inference on a test skeleton sequence**
```python
@@ -381,15 +390,15 @@ Parameters:
method_name='stgcn')
# Download the default pretrained stgcn model in the parent_dir
stgcn_learner.download(
- mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_stgcn')
-
+ mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_stgcn')
+
stgcn_learner.load('./parent_dir/pretrained_models', model_name='pretrained_stgcn')
- test_data_path = stgcn_learner.download(mode="test_data") # Download a test data
+  test_data_path = stgcn_learner.download(mode="test_data")  # Download test data
test_data = numpy.load(test_data_path)
action_category = stgcn_learner.infer(test_data)
-
+
```
-
+
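+* **Evaluation example using a pretrained model.**
+  This is a sketch rather than one of the toolkit's original examples; the dataset path and file names are placeholders that follow the defaults used above, and the pretrained model name matches the inference example.
+  ```python
+  from opendr.perception.skeleton_based_action_recognition.spatio_temporal_gcn_learner import SpatioTemporalGCNLearner
+  from opendr.engine.datasets import ExternalDataset
+
+  # Placeholder path; assumes the preprocessed NTU-RGBD validation files are available here
+  validation_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
+
+  stgcn_learner = SpatioTemporalGCNLearner(temp_path='./parent_dir',
+                                           dataset_name='nturgbd_cv',
+                                           experiment_name='stgcn_nturgbd',
+                                           method_name='stgcn')
+  stgcn_learner.download(
+    mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_stgcn')
+  stgcn_learner.load('./parent_dir/pretrained_models', model_name='pretrained_stgcn')
+
+  # Returns a dictionary with evaluation statistics
+  results = stgcn_learner.eval(validation_dataset,
+                               val_data_filename='val_joints.npy',
+                               val_labels_filename='val_labels.pkl',
+                               skeleton_data_type='joint')
+  ```
+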
* **Optimization example for a previously trained model.**
Inference can be run with the trained model after running self.optimize.
```python
@@ -403,23 +412,21 @@ Parameters:
experiment_name='stgcn_nturgbd',
method_name='stgcn')
stgcn_learner.download(
- mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_stgcn')
-
+ mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_stgcn')
+
stgcn_learner.load(path='./parent_dir/pretrained_models', model_name='pretrained_stgcn')
stgcn_learner.optimize(do_constant_folding=True)
stgcn_learner.save(path='./parent_dir/optimized_model', model_name='optimized_stgcn')
```
- The inference and optimization can be performed for TA-GCN and ST-BLN methods in a similar manner only by specifying the method_name to 'tagcn' or 'stbln', respectively in the learner class constructor.
+  The inference and optimization can be performed for the TA-GCN and ST-BLN methods in a similar manner, simply by setting method_name to 'tagcn' or 'stbln', respectively, in the learner class constructor.
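+
+* **Multi-stream ensemble evaluation example.**
+  The following is a sketch rather than one of the toolkit's original examples; it assumes that per-sample class-score arrays have already been produced for two data streams (for example, from models evaluated on the 'joint' and 'bone' streams) and saved as NumPy files under hypothetical paths.
+  ```python
+  import numpy as np
+  from opendr.perception.skeleton_based_action_recognition.spatio_temporal_gcn_learner import SpatioTemporalGCNLearner
+  from opendr.engine.datasets import ExternalDataset
+
+  validation_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
+
+  stgcn_learner = SpatioTemporalGCNLearner(temp_path='./parent_dir',
+                                           dataset_name='nturgbd_cv',
+                                           experiment_name='stgcn_nturgbd',
+                                           method_name='stgcn')
+
+  # Hypothetical score files produced by earlier evaluations of joint- and bone-stream models
+  joint_scores = np.load('./scores/joint_scores.npy')
+  bone_scores = np.load('./scores/bone_scores.npy')
+
+  # Ensemble the two streams and report top-k accuracy against the validation labels
+  results = stgcn_learner.multi_stream_eval(validation_dataset,
+                                            scores=[joint_scores, bone_scores],
+                                            data_filename='val_joints.npy',
+                                            labels_filename='val_labels.pkl')
+  ```
+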
### Class ProgressiveSpatioTemporalGCNLearner
Bases: `engine.learners.Learner`
-The *ProgressiveSpatioTemporalGCNLearner* class is an implementation of the proposed method PST-GCN [[4]](#4) for Skeleton-based Human
-Action Recognition.
-It finds an optimized and data dependant spatio-temporal graph convolutional network topology for skeleton-based action recognition.
-The [ProgressiveSpatioTemporalGCNLearner](/src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py) class has the
-following public methods:
+The *ProgressiveSpatioTemporalGCNLearner* class is an implementation of the proposed method PST-GCN [[4]](#4) for Skeleton-based Human Action Recognition.
+It finds an optimized and data-dependent spatio-temporal graph convolutional network topology for skeleton-based action recognition.
+The [ProgressiveSpatioTemporalGCNLearner](/src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py) class has the following public methods:
#### `ProgressiveSpatioTemporalGCNLearner` constructor
@@ -428,67 +435,73 @@ ProgressiveSpatioTemporalGCNLearner(self, lr, batch_size, optimizer_name, lr_sch
checkpoint_after_iter, checkpoint_load_iter, temp_path,
device, num_workers, epochs, experiment_name,
device_ind, val_batch_size, drop_after_epoch,
- start_epoch, dataset_name,
+ start_epoch, dataset_name,
blocksize, numblocks, numlayers, topology,
layer_threshold, block_threshold)
```
Constructor parameters:
-- **lr**: *float, default=0.1*
+
+- **lr**: *float, default=0.1*\
Specifies the initial learning rate to be used during training.
-- **batch_size**: *int, default=128*
- Specifies number of skeleton sequences to be bundled up in a batch during training. This heavily affects memory usage, adjust according to your system.
-- **optimizer_name**: *str {'sgd', 'adam'}, default='sgd'*
+- **batch_size**: *int, default=128*\
+  Specifies the number of skeleton sequences to be bundled up in a batch during training.
+  This heavily affects memory usage; adjust according to your system.
+- **optimizer_name**: *str {'sgd', 'adam'}, default='sgd'*\
Specifies the optimizer type that should be used.
-- **lr_schedule**: *str, default=' '*
+- **lr_schedule**: *str, default=' '*\
Specifies the learning rate scheduler.
-- **checkpoint_after_iter**: *int, default=0*
- Specifies per how many training iterations a checkpoint should be saved. If it is set to 0 no checkpoints will be saved.
-- **checkpoint_load_iter**: *int, default=0*
- Specifies which checkpoint should be loaded. If it is set to 0, no checkpoints will be loaded.
-- **temp_path**: *str, default=''*
+- **checkpoint_after_iter**: *int, default=0*\
+ Specifies per how many training iterations a checkpoint should be saved.
+ If it is set to 0 no checkpoints will be saved.
+- **checkpoint_load_iter**: *int, default=0*\
+ Specifies which checkpoint should be loaded.
+ If it is set to 0, no checkpoints will be loaded.
+- **temp_path**: *str, default=''*\
Specifies a path where the algorithm saves the checkpoints and onnx optimized model (if needed).
-- **device**: *{'cpu', 'cuda'}, default='cuda'*
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
Specifies the device to be used.
-- **num_workers**: *int, default=32*
+- **num_workers**: *int, default=32*\
Specifies the number of workers to be used by the data loader.
-- **epochs**: *int, default=50*
+- **epochs**: *int, default=50*\
Specifies the number of epochs the training should run for.
-- **experiment_name**: *str, default='stgcn_nturgbd'*
+- **experiment_name**: *str, default='stgcn_nturgbd'*\
String name to attach to checkpoints.
-- **device_ind**: *list, default=[0]*
- List of GPU indices to be used if the device is 'cuda'.
-- **val_batch_size**: *int, default=256*
- Specifies number of skeleton sequences to be bundled up in a batch during evaluation. This heavily affects memory usage, adjust according to your system.
-- **drop_after_epoch**: *list, default=[30,40]*
- List of epoch numbers in which the optimizer drops the learning rate.
-- **start_epoch**: *int, default=0*
- Specifies the starting epoch number for training.
-- **dataset_name**: *str {'kinetics', 'nturgbd_cv', 'nturgbd_cs'}, default='nturgbd_cv'*
- Specifies the name of dataset that is used for training and evaluation.
-- **num_class**: *int, default=60*
- Specifies the number of classes for the action dataset.
-- **num_point**: *int, default=25*
- Specifies the number of body joints in each skeleton.
-- **num_person**: *int, default=2*
+- **device_ind**: *list, default=[0]*\
+ List of GPU indices to be used if the device is 'cuda'.
+- **val_batch_size**: *int, default=256*\
+  Specifies the number of skeleton sequences to be bundled up in a batch during evaluation.
+  This heavily affects memory usage; adjust according to your system.
+- **drop_after_epoch**: *list, default=[30,40]*\
+ List of epoch numbers in which the optimizer drops the learning rate.
+- **start_epoch**: *int, default=0*\
+ Specifies the starting epoch number for training.
+- **dataset_name**: *str {'kinetics', 'nturgbd_cv', 'nturgbd_cs'}, default='nturgbd_cv'*\
+  Specifies the name of the dataset that is used for training and evaluation.
+- **num_class**: *int, default=60*\
+ Specifies the number of classes for the action dataset.
+- **num_point**: *int, default=25*\
+ Specifies the number of body joints in each skeleton.
+- **num_person**: *int, default=2*\
Specifies the number of body skeletons in each frame.
-- **in_channels**: *int, default=3*
- Specifies the number of input channels for each body joint.
-- **graph_type**: *str {'kinetics', 'ntu'}, default='ntu'*
- Specifies the type of graph structure associated with the dataset.
-- **block_size**: *int, default=20*
- Specifies the number of output channels (or neurons) that are added to each layer of the network at each progression iteration.
-- **numblocks**: *int, default=10*
- Specifies the maximum number of blocks that are added to each layer of the network at each progression iteration.
-- **numlayers**: *int, default=10*
+- **in_channels**: *int, default=3*\
+ Specifies the number of input channels for each body joint.
+- **graph_type**: *str {'kinetics', 'ntu'}, default='ntu'*\
+ Specifies the type of graph structure associated with the dataset.
+- **blocksize**: *int, default=20*\
+ Specifies the number of output channels (or neurons) that are added to each layer of the network at each progression iteration.
+- **numblocks**: *int, default=10*\
+ Specifies the maximum number of blocks that are added to each layer of the network at each progression iteration.
+- **numlayers**: *int, default=10*\
Specifies the maximum number of layers that are built for the network.
-- **topology**: *list, default=[]*
- Specifies the initial topology of the network. The default is set to [], since the method gets an empty network as input and builds it progressively.
-- **layer_threshold**: *float, default=1e-4*
- Specifies the threshold which is used by the method to identify when it should stop adding new layers.
-- **block_threshold**: *float, default=1e-4*
- Specifies the threshold which is used by the model to identify when it should stop adding new blocks in each layer.
-
+- **topology**: *list, default=[]*\
+ Specifies the initial topology of the network.
+ The default is set to [], since the method gets an empty network as input and builds it progressively.
+- **layer_threshold**: *float, default=1e-4*\
+ Specifies the threshold which is used by the method to identify when it should stop adding new layers.
+- **block_threshold**: *float, default=1e-4*\
+ Specifies the threshold which is used by the model to identify when it should stop adding new blocks in each layer.
+
#### `ProgressiveSpatioTemporalGCNLearner.fit`
```python
@@ -499,41 +512,43 @@ ProgressiveSpatioTemporalGCNLearner.fit(self, dataset, val_dataset, logging_path
```
This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
Parameters:
-- **dataset**: *object*
+
+- **dataset**: *object*\
Object that holds the training dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **val_dataset**: *object*
- Object that holds the validation dataset.
-- **logging_path**: *str, default=''*
+- **val_dataset**: *object*\
+ Object that holds the validation dataset.
+- **logging_path**: *str, default=''*\
Path to save TensorBoard log files and the training log files.
- If set to None or '', TensorBoard logging is disabled and no log file is created.
-- **silent**: *bool, default=False*
+ If set to None or '', TensorBoard logging is disabled and no log file is created.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of training progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True***
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
-- **momentum**: *float, default=0.9*
- Specifies the momentum value for optimizer.
-- **nesterov**: *bool, default=True***
- If set to true, the optimizer uses Nesterov's momentum.
-- **weight_decay**: *float, default=0.0001***
- Specifies the weight_decay value of the optimizer.
-- **train_data_filename**: *str, default='train_joints.npy'*
- Filename that contains the training data.
+- **momentum**: *float, default=0.9*\
+  Specifies the momentum value for the optimizer.
+- **nesterov**: *bool, default=True*\
+  If set to True, the optimizer uses Nesterov's momentum.
+- **weight_decay**: *float, default=0.0001*\
+ Specifies the weight_decay value of the optimizer.
+- **train_data_filename**: *str, default='train_joints.npy'*\
+ Filename that contains the training data.
This file should be contained in the dataset path provided.
Note that this is a file name, not a path.
-- **train_labels_filename**: *str, default='train_labels.pkl'*
- Filename of the labels .pkl file.
+- **train_labels_filename**: *str, default='train_labels.pkl'*\
+ Filename of the labels .pkl file.
This file should be contained in the dataset path provided.
-- **val_data_filename**: *str, default='val_joints.npy'*
+- **val_data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **val_labels_filename**: *str, default='val_labels.pkl'*
+- **val_labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
#### `ProgressiveSpatioTemporalGCNLearner.eval`
@@ -544,112 +559,113 @@ ProgressiveSpatioTemporalGCNLearner.eval(self, val_dataset, val_loader, epoch, s
```
This method is used to evaluate a trained model on an evaluation dataset.
-Returns a dictionary containing stats regarding evaluation.
+Returns a dictionary containing stats regarding evaluation.
+
Parameters:
-- **val_dataset**: *object*
+- **val_dataset**: *object*\
Object that holds the evaluation dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **val_loader**: *object, default=None*
+- **val_loader**: *object, default=None*\
Object that holds a Python iterable over the evaluation dataset.
Object of `torch.utils.data.DataLoader` class.
-- **epoch**: *int, default=0*
- The training epoch in which the model is evaluated.
-- **silent**: *bool, default=False*
+- **epoch**: *int, default=0*\
+ The training epoch in which the model is evaluated.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of evaluation progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True*
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
-- **val_data_filename**: *str, default='val_joints.npy'*
+- **val_data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **val_labels_filename**: *str, default='val_labels.pkl'*
+- **val_labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
-- **save_score**: *bool, default=False*
- If set to True, it saves the classification score of all samples in differenc classes
- in a log file. Default to False.
-- **wrong_file**: *str, default=None*
- If set to True, it saves the results of wrongly classified samples. Default to False.
-- **result_file**: *str, default=None*
- If set to True, it saves the classification results of all samples. Default to False.
-- **show_topk**: *list, default=[1, 5]*
- Is set to a list of integer numbers defining the k in top-k accuracy. Default is set to [1,5].
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
+- **save_score**: *bool, default=False*\
+ If set to True, it saves the classification score of all samples in different classes in a log file.
+- **wrong_file**: *str, default=None*\
+  If a file path is provided, the results of wrongly classified samples are saved to this file.
+- **result_file**: *str, default=None*\
+  If a file path is provided, the classification results of all samples are saved to this file.
+- **show_topk**: *list, default=[1, 5]*\
+  A list of integers defining the k values for top-k accuracy.
#### `ProgressiveSpatioTemporalGCNLearner.init_model`
```python
ProgressiveSpatioTemporalGCNLearner.init_model(self)
```
-This method is used to initialize the imported model and its loss function.
-
-
+This method is used to initialize the imported model and its loss function.
+
+
#### `ProgressiveSpatioTemporalGCNLearner.network_builder`
```python
ProgressiveSpatioTemporalGCNLearner.network_builder(self, dataset, val_dataset, train_data_filename,
train_labels_filename, val_data_filename,
val_labels_filename, skeleton_data_type, verbose)
```
-This method implement the ST-GCN Augmentation Module (ST-GCN-AM) which builds the network topology progressively.
+This method implements the ST-GCN Augmentation Module (ST-GCN-AM), which builds the network topology progressively.
+
Parameters:
-- **dataset**: *object*
+- **dataset**: *object*\
Object that holds the training dataset.
-- **val_dataset**: *object*
+- **val_dataset**: *object*\
Object that holds the evaluation dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **train_data_filename**: *str, default='train_joints.npy'*
- Filename that contains the training data.
+- **train_data_filename**: *str, default='train_joints.npy'*\
+ Filename that contains the training data.
This file should be contained in the dataset path provided.
Note that this is a file name, not a path.
-- **train_labels_filename**: *str, default='train_labels.pkl'*
- Filename of the labels .pkl file.
+- **train_labels_filename**: *str, default='train_labels.pkl'*\
+ Filename of the labels .pkl file.
This file should be contained in the dataset path provided.
-- **val_data_filename**: *str, default='val_joints.npy'*
+- **val_data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **val_labels_filename**: *str, default='val_labels.pkl'*
+- **val_labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
-- **verbose**: *bool, default=True***
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
+- **verbose**: *bool, default=True*\
Whether to print messages in the console.
-
-
+
+
#### `ProgressiveSpatioTemporalGCNLearner.infer`
```python
ProgressiveSpatioTemporalGCNLearner.infer(self, SkeletonSeq_batch)
```
-This method is used to perform action recognition on a sequence of skeletons.
-It returns the action category as an object of `engine.target.Category` if a proper input object `engine.data.SkeletonSequence` is given.
+This method is used to perform action recognition on a sequence of skeletons.
+It returns the action category as an object of `engine.target.Category` if a proper input object `engine.data.SkeletonSequence` is given.
Parameters:
-- **SkeletonSeq_batch**: *object***
+
+- **SkeletonSeq_batch**: *object*\
Object of type engine.data.SkeletonSequence.
#### `ProgressiveSpatioTemporalGCNLearner.save`
```python
ProgressiveSpatioTemporalGCNLearner.save(self, path, model_name, verbose)
```
+
This method is used to save a trained model.
-Provided with the path "/my/path" (absolute or relative), it creates the "path" directory, if it does not already
-exist. Inside this folder, the model is saved as "model_name.pt" and the metadata file as "model_name.json". If the directory
-already exists, the "model_name.pt" and "model_name.json" files are overwritten.
+Provided with the path "/my/path" (absolute or relative), it creates the "path" directory, if it does not already exist.
+Inside this folder, the model is saved as "model_name.pt" and the metadata file as "model_name.json". If the directory already exists, the "model_name.pt" and "model_name.json" files are overwritten.
-If [`self.optimize`](/src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py#L576) was run previously, it saves the optimized ONNX model in
-a similar fashion with an ".onnx" extension, by copying it from the self.temp_path it was saved previously
-during conversion.
+If [`self.optimize`](/src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py#L576) was run previously, it saves the optimized ONNX model in a similar fashion with an ".onnx" extension, by copying it from the self.temp_path where it was saved during conversion.
Parameters:
-- **path**: *str*
+
+- **path**: *str*\
Path to save the model.
-- **model_name**: *str*
- The file name to be saved.
-- **verbose**: *bool, default=False*
+- **model_name**: *str*\
+ The file name to be saved.
+- **verbose**: *bool, default=False*\
If set to True, prints a message on success.
#### `ProgressiveSpatioTemporalGCNLearner.load`
@@ -661,11 +677,12 @@ This method is used to load a previously saved model from its saved folder.
Loads the model from inside the directory of the path provided, using the metadata .json file included.
Parameters:
-- **path**: *str*
+
+- **path**: *str*\
Path of the model to be loaded.
-- **model_name**: *str*
- The file name to be loaded.
-- **verbose**: *bool, default=False*
+- **model_name**: *str*\
+ The file name to be loaded.
+- **verbose**: *bool, default=False*\
If set to True, prints a message on success.
@@ -677,7 +694,8 @@ ProgressiveSpatioTemporalGCNLearner.optimize(self, do_constant_folding)
This method is used to optimize a trained model to ONNX format, which can then be used for inference.
Parameters:
-- **do_constant_folding**: *bool, default=False*
+
+- **do_constant_folding**: *bool, default=False*\
ONNX format optimization.
If True, the constant-folding optimization is applied to the model during export.
Constant-folding optimization will replace some of the operations that have all constant inputs with pre-computed constant nodes.
@@ -689,27 +707,28 @@ ProgressiveSpatioTemporalGCNLearner.multi_stream_eval(self, dataset, scores, dat
labels_filename, skeleton_data_type,
verbose, silent)
```
-This method is used to ensemble the classification results of the model on two or more data streams like joints, bones and motions.
-It returns the top-k classification performance of ensembled model.
+This method is used to ensemble the classification results of the model on two or more data streams like joints, bones and motions.
+It returns the top-k classification performance of the ensembled model.
Parameters:
-- **dataset**: *object*
+
+- **dataset**: *object*\
Object that holds the dataset.
Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
-- **score**: *list*
+- **scores**: *list*\
A list of score arrays. Each array in the list contains the evaluation results for a data stream.
-- **data_filename**: *str, default='val_joints.npy'*
+- **data_filename**: *str, default='val_joints.npy'*\
Filename that contains the validation data.
This file should be contained in the dataset path provided.
Note that this is a filename, not a path.
-- **labels_filename**: *str, default='val_labels.pkl'*
+- **labels_filename**: *str, default='val_labels.pkl'*\
Filename of the validation labels .pkl file.
This file should be contained in the dataset path provided.
-- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*
- The data stream that should be used for training and evaluation.
-- **silent**: *bool, default=False*
+- **skeleton_data_type**: *str {'joint', 'bone', 'motion'}, default='joint'*\
+ The data stream that should be used for training and evaluation.
+- **silent**: *bool, default=False*\
If set to True, disables all printing of evaluation progress reports and other information to STDOUT.
-- **verbose**: *bool, default=True*
+- **verbose**: *bool, default=True*\
If set to True, enables the maximum verbosity.
@@ -719,27 +738,238 @@ Parameters:
ProgressiveSpatioTemporalGCNLearner.download(self, path, mode, verbose, url, file_name)
```
-Download utility for various skeleton-based action recognition components. Downloads files depending on mode and
-saves them in the path provided. It supports downloading:
-1. the pretrained weights for stgcn, tagcn and stbln models.
-2. a dataset containing one or more skeleton sequences and its labels.
+Download utility for various skeleton-based action recognition components.
+Downloads files depending on mode and saves them in the path provided.
+It supports downloading:
+1. the pretrained weights for stgcn, tagcn and stbln models.
+2. a dataset containing one or more skeleton sequences and its labels.
Parameters:
-- **path**: *str, default=None*
+
+- **path**: *str, default=None*\
Local path to save the files, defaults to self.parent_dir if None.
-- **mode**: *str, default="pretrained"*
+- **mode**: *str, default="pretrained"*\
What file to download; can be one of "pretrained", "train_data", "val_data", "test_data".
-- **verbose**: *bool, default=False*
+- **verbose**: *bool, default=False*\
Whether to print messages in the console.
-- **url**: *str, default=OpenDR FTP URL*
+- **url**: *str, default=OpenDR FTP URL*\
URL of the FTP server.
-- **file_name**: *str*
- The name of the file containing the pretrained model.
+- **file_name**: *str*\
+ The name of the file containing the pretrained model.
+
+
+### Class CoSTGCNLearner
+Bases: `engine.learners.Learner`
+
+The *CoSTGCNLearner* class is an implementation of the proposed method CoSTGCN [[8]](#8) for Continual Skeleton-based Human Action Recognition.
+It performs skeleton-based action recognition continuously in a frame-wise manner.
+The [CoSTGCNLearner](/src/opendr/perception/skeleton_based_action_recognition/continual_stgcn_learner.py) class has the following public methods:
+
+
+#### `CoSTGCNLearner` constructor
+```python
+CoSTGCNLearner(self, lr, iters, batch_size, optimizer, lr_schedule, backbone, network_head,
+ checkpoint_after_iter, checkpoint_load_iter, temp_path,
+ device, loss, weight_decay, momentum, drop_last, pin_memory, num_workers, seed,
+ num_classes, num_point, num_person, in_channels, graph_type, sequence_len
+ )
+```
+
+Constructor parameters:
+
+- **lr**: *float, default=0.001*\
+ Specifies the learning rate to be used during training.
+- **iters**: *int, default=10*\
+ Number of epochs to train for.
+- **batch_size**: *int, default=64*\
+  Specifies the number of skeleton sequences to be bundled up in a batch during training.
+  This heavily affects memory usage; adjust according to your system.
+- **optimizer**: *str {'sgd', 'adam'}, default='adam'*\
+ Name of optimizer to use ("sgd" or "adam").
+- **lr_schedule**: *str, default=''*\
+ Specifies the learning rate scheduler.
+- **network_head**: *str, default='classification'*\
+ Head of network (only "classification" is currently available).
+- **checkpoint_after_iter**: *int, default=0*\
+ Unused parameter.
+- **checkpoint_load_iter**: *int, default=0*\
+ Unused parameter.
+- **temp_path**: *str, default=''*\
+ Path in which to store temporary files.
+- **device**: *{'cpu', 'cuda'}, default='cuda'*\
+ Specifies the device to be used.
+- **loss**: *str, default="cross_entropy"*\
+ Name of loss in torch.nn.functional to use. Defaults to "cross_entropy".
+- **weight_decay**: *float, default=1e-5*\
+ Weight decay used for optimization. Defaults to 1e-5.
+- **momentum**: *float, default=0.9*\
+ Momentum used for optimization. Defaults to 0.9.
+- **drop_last**: *bool, default=True*\
+ Drop last data point if a batch cannot be filled. Defaults to True.
+- **pin_memory**: *bool, default=False*\
+ Pin memory in dataloader. Defaults to False.
+- **num_workers**: *int, default=0*\
+ Specifies the number of workers to be used by the data loader.
+- **seed**: *int, default=123*\
+ Random seed. Defaults to 123.
+- **num_classes**: *int, default=60*\
+ Specifies the number of classes for the action dataset.
+- **num_point**: *int, default=25*\
+ Specifies the number of body joints in each skeleton.
+- **num_person**: *int, default=2*\
+ Specifies the number of body skeletons in each frame.
+- **in_channels**: *int, default=3*\
+ Specifies the number of input channels for each body joint.
+- **graph_type**: *str {'ntu', 'openpose'}, default='ntu'*\
+ Specifies the type of graph structure associated with the dataset.
+- **sequence_len**: *int, default=300*\
+  Size of the final global average pooling.
+
+#### `CoSTGCNLearner.fit`
+```python
+CoSTGCNLearner.fit(self, dataset, val_dataset, epochs, steps)
+```
+
+This method is used for training the algorithm on a train dataset and validating on a val dataset.
+
+Parameters:
+
+- **dataset**: *object*\
+ Object that holds the training dataset.
+ Can be of type `ExternalDataset` or a custom dataset inheriting from `DatasetIterator`.
+- **val_dataset**: *object*\
+ Object that holds the validation dataset.
+- **epochs**: *int, default=None*\
+  Number of epochs.
+  If None is supplied, self.iters will be used.
+- **steps**: *int, default=None*\
+  Number of training steps to conduct.
+  If None, this is determined by epochs.
+
+
+#### `CoSTGCNLearner.eval`
+```python
+CoSTGCNLearner.eval(self, dataset, steps)
+```
+
+This method is used to evaluate a trained model on an evaluation dataset.
+Returns a dictionary containing stats regarding evaluation.
+
+Parameters:
+
+- **dataset**: *object*\
+  Dataset on which to evaluate the model.
+- **steps**: *int, default=None*\
+ Number of validation batches to evaluate.
+ If None, all batches are evaluated.
+
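+
+A minimal training and evaluation sketch is given below.
+It is not one of the toolkit's original examples: the module path follows the source file linked above, the dataset paths are placeholders, and the data is assumed to be provided as an `ExternalDataset` (or a custom `DatasetIterator`) as described for `fit`.
+
+```python
+from opendr.perception.skeleton_based_action_recognition.continual_stgcn_learner import CoSTGCNLearner
+from opendr.engine.datasets import ExternalDataset
+
+# Placeholder paths to preprocessed NTU-RGBD skeleton data
+training_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
+validation_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
+
+costgcn_learner = CoSTGCNLearner(device='cuda', temp_path='./parent_dir',
+                                 batch_size=64, iters=10, num_workers=4,
+                                 graph_type='ntu')
+
+# Train for self.iters epochs, validating on the validation split
+costgcn_learner.fit(dataset=training_dataset, val_dataset=validation_dataset)
+
+# Evaluate the trained model; returns a dictionary with evaluation statistics
+results = costgcn_learner.eval(validation_dataset)
+```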
+
+#### `CoSTGCNLearner.init_model`
+```python
+CoSTGCNLearner.init_model(self)
+```
+This method is used to initialize the model with random parameters.
+
+#### `CoSTGCNLearner.save`
+```python
+CoSTGCNLearner.save(self, path)
+```
+
+This method is used to save the model weights and metadata to the provided path.
+
+Parameters:
+
+- **path**: *str*\
+  Directory in which to save the model weights and metadata.
+
+
+#### `CoSTGCNLearner.load`
+```python
+CoSTGCNLearner.load(self, path)
+```
+
+This method is used to load a previously saved model from its saved folder.
+Loads the model from inside the directory of the path provided, using the metadata .json file included.
+
+Parameters:
+
+- **path**: *str*\
+ Path to metadata file in json format or path to model weights.
+
+
+#### `CoSTGCNLearner.optimize`
+```python
+CoSTGCNLearner.optimize(self, do_constant_folding)
+```
+
+This method is used to optimize a trained model to ONNX format, which can then be used for inference.
+
+Parameters:
+
+- **do_constant_folding**: *bool, default=False*\
+ ONNX format optimization.
+ If True, the constant-folding optimization is applied to the model during export.
+
+
+#### `CoSTGCNLearner.download`
+```python
+@staticmethod
+CoSTGCNLearner.download(self, dataset_name, experiment_name, path, method_name, mode, verbose, url, file_name)
+```
+
+Downloads files depending on mode and saves them in the path provided.
+It supports downloading:
+1. the pretrained weights for the stgcn model.
+2. a small sample dataset and its labels.
+
+Parameters:
+
+- **dataset_name**: *str, default='nturgbd_cv'*\
+  The name of the dataset that should be downloaded.
+- **experiment_name**: *str, default='stgcn_nturgbd'*\
+  The name of the experiment for which the pretrained model is saved.
+- **path**: *str, default=None*\
+ Local path to save the files, defaults to self.parent_dir if None.
+- **mode**: *str, default="pretrained"*\
+  What file to download; can be one of "pretrained", "train_data", "val_data", "test_data".
+- **verbose**: *bool, default=False*\
+ Whether to print messages in the console.
+- **url**: *str, default=OpenDR FTP URL*\
+ URL of the FTP server.
+- **file_name**: *str, default="costgcn_ntu60_xview_joint.ckpt"*\
+ The name of the file containing the pretrained model.
+
+#### `CoSTGCNLearner.infer`
+```python
+CoSTGCNLearner.infer(self, batch)
+```
+
+This method is used to perform inference on a batch of data.
+It returns a list of output categories.
+
+Parameters:
+
+- **batch**: *object*\
+  Batch of skeletons for a single time-step.
+  The batch should have shape (C, V, S), (C, T, V, S), or (B, C, T, V, S). Here, B is the batch size, C is the number of input channels, T is the number of time-steps, V is the number of vertices, and S is the number of skeletons.
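+
+The sketch below is not one of the toolkit's original examples.
+It assumes the default pretrained checkpoint name listed for `download`, that the downloaded weights end up directly under the placeholder path, and it uses a zero-valued placeholder input with the documented (B, C, T, V, S) shape (convert it to the tensor type the implementation expects, if required).
+
+```python
+import numpy as np
+from opendr.perception.skeleton_based_action_recognition.continual_stgcn_learner import CoSTGCNLearner
+
+costgcn_learner = CoSTGCNLearner(device='cuda', temp_path='./parent_dir', graph_type='ntu')
+
+# Download the default pretrained weights (placeholder path, default file name)
+costgcn_learner.download(path='./parent_dir/pretrained_models', mode="pretrained",
+                         file_name="costgcn_ntu60_xview_joint.ckpt")
+costgcn_learner.load('./parent_dir/pretrained_models/costgcn_ntu60_xview_joint.ckpt')
+
+# One time-step of skeleton data: B=1 sequence, C=3 channels, T=1 frame, V=25 joints, S=2 skeletons
+frame = np.zeros((1, 3, 1, 25, 2), dtype=np.float32)
+categories = costgcn_learner.infer(frame)
+```
+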
#### Examples
-* **Finding an optimized spatio-temporal GCN architecture based on training dataset defined as an `ExternalDataset`**.
+* **Finding an optimized spatio-temporal GCN architecture based on training dataset defined as an `ExternalDataset`**.
The training and evaluation dataset should be present in the path provided, along with the labels file.
The `batch_size` argument should be adjusted according to available memory.
@@ -748,21 +978,21 @@ Parameters:
from opendr.engine.datasets import ExternalDataset
training_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
validation_dataset = ExternalDataset(path='./data/preprocessed_nturgbd/xview', dataset_type='NTURGBD')
-
+
pstgcn_learner = ProgressiveSpatioTemporalGCNLearner(temp_path='./parent_dir',
batch_size=64, epochs=65,
checkpoint_after_iter=10, val_batch_size=128,
dataset_name='nturgbd_cv', experiment_name='pstgcn_nturgbd',
blocksize=20, numblocks=1, numlayers=1, topology=[],
layer_threshold=1e-4, block_threshold=1e-4)
-
+
pstgcn_learner.network_builder(dataset=training_dataset, val_dataset=validation_dataset,
train_data_filename='train_joints.npy',
train_labels_filename='train_labels.pkl',
val_data_filename="val_joints.npy",
val_labels_filename="val_labels.pkl",
skeleton_data_type='joint')
-
+
pstgcn_learner.save(path='./saved_models/pstgcn_nturgbd_cv_checkpoints', model_name='test_pstgcn')
```
@@ -776,19 +1006,19 @@ Parameters:
dataset_name='nturgbd_cv', experiment_name='pstgcn_nturgbd',
blocksize=20, numblocks=1, numlayers=1, topology=[],
layer_threshold=1e-4, block_threshold=1e-4)
-
+
# Download the default pretrained pstgcn model in the parent_dir
pstgcn_learner.download(
- mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_pstgcn')
-
+ mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_pstgcn')
+
pstgcn_learner.load('./parent_dir/pretrained_models', model_name='pretrained_pstgcn')
- test_data_path = pstgcn_learner.download(mode="test_data") # Download a test data
+  test_data_path = pstgcn_learner.download(mode="test_data")  # Download test data
test_data = numpy.load(test_data_path)
action_category = pstgcn_learner.infer(test_data)
-
+
```
-
-* **Optimization example for a previously trained model.**
+
+* **Optimization example for a previously trained model**
Inference can be run with the trained model after running self.optimize.
```python
from opendr.perception.skeleton_based_action_recognition.progressive_spatio_temporal_gcn_learner import ProgressiveSpatioTemporalGCNLearner
@@ -800,8 +1030,8 @@ Parameters:
blocksize=20, numblocks=1, numlayers=1, topology=[],
layer_threshold=1e-4, block_threshold=1e-4)
pstgcn_learner.download(
- mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_pstgcn')
-
+ mode="pretrained", path='./parent_dir/pretrained_models', file_name='pretrained_pstgcn')
+
pstgcn_learner.load(path='./parent_dir/pretrained_models', model_name='pretrained_pstgcn')
pstgcn_learner.optimize(do_constant_folding=True)
pstgcn_learner.save(path='./parent_dir/optimized_model', model_name='optimized_pstgcn')
@@ -817,44 +1047,55 @@ The tests were conducted on the following computational devices:
- Nvidia Jetson Xavier AGX
- Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors
-
Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU.
-The ST-GCN, TAGCN and ST-BLN models are implemented in *SpatioTemporalGCNLearner* and the PST-GCN model is implemented in *ProgressiveSpatioTemporalGCNLearner*.
+The ST-GCN, TA-GCN and ST-BLN models are implemented in *SpatioTemporalGCNLearner* and the PST-GCN model is implemented in *ProgressiveSpatioTemporalGCNLearner*.
Note that the models receive each input sample as a sequence of 300 skeletons, and the pose estimation process is not involved in this benchmarking.
The skeletal data is from the NTU-RGBD dataset. We report speed (single sample per inference) as the mean of 100 runs.
The noted memory is the maximum allocated memory on GPU during inference.
The performance evaluation results of the *SpatioTemporalGCNLearner* and *ProgressiveSpatioTemporalGCNLearner* in terms of prediction accuracy on NTU-RGBD-60, parameter count and maximum allocated memory are reported in the following Tables.
-The performance of TA-GCN is reported when it selects 100 frames out of 300 (T=100). PST-GCN finds different architectures for two different dataset settings (CV and CS) which leads to different classification accuracy, number of parameters and memory allocation.
-
-| Method | Acc. (%) | Params (M) | Mem. (MB) |
-|-------------------|----------|------------|-----------|
-| ST-GCN | 88.3 | 3.12 | 47.37 |
-| TA-GCN (T=100) | 94.2 | 2.24 | 42.65 |
-| ST-BLN | 93.8 | 5.3 | 55.77 |
-| PST-GCN (CV) | 94.33 | 0.63 | 31.65 |
-| PST-GCN (CS) | 87.9 | 0.92 | 32.2 |
+The performance of TA-GCN is reported when it selects 100 frames out of 300 (T=100). PST-GCN finds different architectures for two different dataset settings (CV and CS), which leads to different classification accuracies, parameter counts and memory allocations.
+
+| Method | Acc. (%) | Params (M) | Mem. (MB) |
+|----------------|----------|------------|-----------|
+| ST-GCN | 88.3 | 3.12 | 47.37 |
+| TA-GCN (T=100) | 94.2 | 2.24 | 42.65 |
+| ST-BLN | 93.8 | 5.3 | 55.77 |
+| PST-GCN (CV) | 94.33 | 0.63 | 31.65 |
+| PST-GCN (CS) | 87.9 | 0.92 | 32.2 |
+| CoST-GCN (CV) | 93.8 | 3.1 | 36.1 |
+| CoST-GCN (CS) | 86.3 | 3.1 | 36.1 |
+| CoA-GCN (CV) | 92.6 | 3.5 | 37.4 |
+| CoA-GCN (CS) | 84.1 | 3.5 | 37.4 |
+| CoS-TR (CV) | 92.4 | 3.1 | 36.1 |
+| CoS-TR (CS) | 86.3 | 3.1 | 36.1 |
The inference speed (evaluations/second) of both learners on various computational devices are as follows:
-| Method | CPU | Jetson TX2 | Jetson Xavier | RTX 2080 Ti |
+| Method | CPU | Jetson TX2 | Jetson Xavier | RTX 2080 Ti |
|----------------|-------|------------|---------------|-------------|
-| ST-GCN | 13.26 | 4.89 | 15.27 | 63.32 |
-| TA-GCN (T=100) | 20.47 | 10.6 | 25.43 | 93.33 |
+| ST-GCN | 13.26 | 4.89 | 15.27 | 63.32 |
+| TA-GCN (T=100) | 20.47 | 10.6 | 25.43 | 93.33 |
| ST-BLN | 7.69 | 3.57 | 12.56 | 55.98 |
-| PST-GCN (CV) | 15.38 | 6.57 | 20.25 | 83.10 |
-| PST-GCN (CS) | 13.07 | 5.53 | 19.41 | 77.57 |
-
-Energy (Joules) of both learners’ inference on embedded devices is shown in the following:
-
-| Method | Jetson TX2 | Jetson Xavier |
-|-------------------|-------------|----------------|
-| ST-GCN | 6.07 | 1.38 |
-| TA-GCN (T=100) | 2.23 | 0.59 |
-| ST-BLN | 9.26 | 2.01 |
-| PST-GCN (CV) | 4.13 | 1.00 |
-| PST-GCN (CS) | 5.54 | 1.12 |
+| PST-GCN (CV) | 15.38 | 6.57 | 20.25 | 83.10 |
+| PST-GCN (CS) | 13.07 | 5.53 | 19.41 | 77.57 |
+| CoST-GCN | 34.26 | 11.22 | 20.91 | - |
+| CoA-GCN | 23.09 | 7.24 | 15.28 | - |
+| CoS-TR | 30.12 | 10.49 | 20.87 | - |
+
+Energy (Joules) of both learners’ inference on embedded devices is shown in the following:
+
+| Method | Jetson TX2 | Jetson Xavier |
+|----------------|------------|---------------|
+| ST-GCN | 6.07 | 1.38 |
+| TA-GCN (T=100) | 2.23 | 0.59 |
+| ST-BLN | 9.26 | 2.01 |
+| PST-GCN (CV) | 4.13 | 1.00 |
+| PST-GCN (CS) | 5.54 | 1.12 |
+| CoST-GCN | 1.95 | 0.57 |
+| CoA-GCN | 3.33 | 0.91 |
+| CoS-TR | 2.28 | 0.55 |
The platform compatibility evaluation is also reported below:
@@ -871,31 +1112,35 @@ The platform compatibility evaluation is also reported below:
## References
-[1]
-[Yan, S., Xiong, Y., & Lin, D. (2018, April). Spatial temporal graph convolutional networks for skeleton-based action
+[1]
+[Yan, S., Xiong, Y., & Lin, D. (2018, April). Spatial temporal graph convolutional networks for skeleton-based action
recognition. In Proceedings of the AAAI conference on artificial intelligence (Vol. 32, No. 1).](
https://arxiv.org/abs/1801.07455)
-[2]
+[2]
[Heidari, Negar, and Alexandros Iosifidis. "Temporal attention-augmented graph convolutional network for efficient skeleton-based human action recognition." 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021.](https://ieeexplore.ieee.org/abstract/document/9412091)
-[3]
-[Heidari, N., & Iosifidis, A. (2020). On the spatial attention in Spatio-Temporal Graph Convolutional Networks for
+[3]
+[Heidari, N., & Iosifidis, A. (2020). On the spatial attention in Spatio-Temporal Graph Convolutional Networks for
skeleton-based human action recognition. arXiv preprint arXiv: 2011.03833.](https://arxiv.org/abs/2011.03833)
-[4]
+[4]
[Heidari, Negar, and Alexandros Iosifidis. "Progressive Spatio-Temporal Graph Convolutional Network for Skeleton-Based Human Action Recognition." ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021.](https://ieeexplore.ieee.org/abstract/document/9413860)
-[5]
+[5]
[Shahroudy, A., Liu, J., Ng, T. T., & Wang, G. (2016). Ntu rgb+ d: A large scale dataset for 3d human activity analysis.
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1010-1019).](
https://openaccess.thecvf.com/content_cvpr_2016/html/Shahroudy_NTU_RGBD_A_CVPR_2016_paper.html)
[6]
-[Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., ... & Zisserman, A. (2017).
-The kinetics human action video dataset. arXiv preprint arXiv:1705.06950.](https://arxiv.org/pdf/1705.06950.pdf)
+[Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., ... & Zisserman, A. (2017).
+The kinetics human action video dataset. arXiv preprint arXiv:1705.06950.](https://arxiv.org/pdf/1705.06950.pdf)
[7]
-[Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity
+[Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity
fields. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7291-7299).](
https://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Realtime_Multi-Person_2D_CVPR_2017_paper.html)
+
+[8]
+[Hedegaard, Lukas, Negar Heidari, and Alexandros Iosifidis. "Online Skeleton-based Action Recognition with Continual Spatio-Temporal Graph Convolutional Networks." arXiv preprint arXiv:2203.11009 (2022).](
+https://arxiv.org/abs/2203.11009)
\ No newline at end of file
diff --git a/docs/reference/smpld_models.md b/docs/reference/smpld_models.md
index b5c7418472..e0bba4dff7 100644
--- a/docs/reference/smpld_models.md
+++ b/docs/reference/smpld_models.md
@@ -6,10 +6,10 @@ This folder contains code for:
-
-
-
-
+
+
+
+
### Download the raw SMPL+D models only (≈12.5Gb)
diff --git a/packages.txt b/packages.txt
index 5c24f8a26e..a00820971f 100644
--- a/packages.txt
+++ b/packages.txt
@@ -6,7 +6,6 @@ perception/pose_estimation
perception/fall_detection
perception/compressive_learning
perception/heart_anomaly_detection
-simulation/human_model_generation
perception/multimodal_human_centric
perception/facial_expression_recognition
perception/activity_recognition
@@ -16,6 +15,7 @@ perception/object_tracking_2d
perception/object_detection_3d
perception/object_tracking_3d
perception/panoptic_segmentation
+simulation/human_model_generation
utils/hyperparameter_tuner
-control/single_demo_grasp
-opendr
+utils/ambiguity_measure
+opendr
\ No newline at end of file
diff --git a/projects/README.md b/projects/README.md
index 6cf05ca17a..d755cc6794 100644
--- a/projects/README.md
+++ b/projects/README.md
@@ -1,3 +1,8 @@
# Projects
-
This folder contains sample applications demonstrating the OpenDR toolkit functionalities.
+
+This includes:
+- [Python usage examples and tutorials](python)
+- [C_API usage examples](c_api)
+- [ROS 1 nodes](opendr_ws)
+- [ROS 2 nodes](opendr_ws_2)
diff --git a/projects/opendr_ws/README.md b/projects/opendr_ws/README.md
old mode 100755
new mode 100644
index 2985a9f062..31a6aba763
--- a/projects/opendr_ws/README.md
+++ b/projects/opendr_ws/README.md
@@ -1,59 +1,94 @@
# opendr_ws
## Description
-This ROS workspace contains ROS nodes and tools developed by OpenDR project. Currently, ROS nodes are compatible with ROS Noetic.
-This workspace contains the `ros_bridge` package, which provides message definitions for ROS-compatible OpenDR data types,
+This ROS workspace contains ROS nodes and tools developed by OpenDR project.
+Currently, ROS nodes are compatible with **ROS Melodic for Ubuntu 18.04** and **ROS Noetic for Ubuntu 20.04**.
+The instructions that follow target ROS Noetic, but can easily be modified for ROS Melodic by swapping out the version name.
+This workspace contains the `opendr_bridge` package, which provides message definitions for ROS-compatible OpenDR data types,
as well the `ROSBridge` class which provides an interface to convert OpenDR data types and targets into ROS-compatible
-ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/rosbridge.md).
-
-
-## Setup
-For running a minimal working example you can follow the instructions below:
-
-0. Source the necessary distribution tools:
-
- ```source /opt/ros/noetic/setup.bash```
-
-1. Make sure you are inside opendr_ws
-2. If you are planning to use a usb camera for the demos, install the corresponding package and its dependencies:
-
-```shell
-cd src
-git clone https://github.com/ros-drivers/usb_cam
-cd ..
-rosdep install --from-paths src/ --ignore-src
-```
-3. Install the following dependencies, required in order to use the OpenDR ROS tools:
-```shell
-sudo apt-get install ros-noetic-vision-msgs ros-noetic-geometry-msgs ros-noetic-sensor-msgs ros-noetic-audio-common-msgs
-```
-4. Build the packages inside workspace
-```shell
-catkin_make
-```
-5. Source the workspace and you are ready to go!
-```shell
-source devel/setup.bash
-```
+ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/opendr-ros-bridge.md).
+
+
+## First time setup
+For the initial setup you can follow the instructions below:
+
+0. Make sure ROS Noetic is installed: http://wiki.ros.org/noetic/Installation/Ubuntu (desktop full install)
+
+1. Open a new terminal window and source the necessary distribution tools:
+ ```shell
+ source /opt/ros/noetic/setup.bash
+ ```
+ _For convenience, you can add this line to your `.bashrc` so you don't have to source the tools each time you open a terminal window._
+
+2. Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using:
+ ```shell
+ source bin/activate.sh
+ ```
+ You need to do this step every time before running an OpenDR node.
+
+3. Navigate into the OpenDR ROS workspace:
+ ```shell
+ cd projects/opendr_ws
+ ```
+
+4. Build the packages inside the workspace:
+ ```shell
+ catkin_make
+ ```
+
+5. Source the workspace:
+ ```shell
+ source devel/setup.bash
+ ```
+   You are now ready to run an OpenDR ROS node in this terminal, but first the ROS master node needs to be running.
+
+6. Before continuing, you need to start the ROS master node by running:
+ ```shell
+ roscore &
+ ```
+ You can now run an OpenDR ROS node. More information below.
+
+#### After first time setup
+For running OpenDR nodes after you have completed the initial setup, you can skip step 0 from the list above.
+You can also skip building the workspace (step 4), provided it has already been built and no changes were made to the code inside the workspace, e.g. you have not modified the source code of a node.
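
As a quick reference, the per-session commands boil down to the following sketch (assuming the default `~/opendr` installation path used in the steps above):

```shell
# Condensed per-session workflow after the first-time setup
source /opt/ros/noetic/setup.bash      # step 1: ROS distribution tools
cd ~/opendr && source bin/activate.sh  # step 2: activate the OpenDR environment
cd projects/opendr_ws                  # step 3: enter the ROS workspace
source devel/setup.bash                # step 5: source the built workspace
roscore &                              # step 6: start the ROS master, if not already running
```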
+
+#### More information
+After completing the setup you can read more information on the [opendr perception package README](src/opendr_perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance.
+
+#### Node documentation
+You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to documentation for specific nodes with instructions on how to run and modify them.
+
+**For first time users we suggest reading the introductory sections (prerequisites and notes) first.**
+
## Structure
-Currently, apart from tools, opendr_ws contains the following ROS nodes:
-
-### [Perception](src/perception/README.md)
-1. Pose Estimation
-2. Fall Detection
-3. 2D Object Detection
-4. Face Detection
-5. Panoptic Segmentation
-6. Face Recognition
-7. Semantic Segmentation
-8. RGBD Hand Gesture Recognition
-9. Heart Anomaly Detection
-10. Video Human Activity Recognition
-11. Landmark-based Facial Expression Recognition
-12. Skeleton-based Human Action Recognition
-13. Speech Command Recognition
-14. Voxel Object Detection 3D
-15. AB3DMOT Object Tracking 3D
-16. FairMOT Object Tracking 2D
-17. Deep Sort Object Tracking 2D
+Currently, apart from tools, opendr_ws contains the following ROS nodes (categorized according to the input they receive):
+
+### [Perception](src/opendr_perception/README.md)
+## RGB input
+1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros-node)
+2. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros-node)
+3. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node)
+4. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node)
+5. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes)
+6. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros-node)
+7. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes)
+8. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros-node)
+9. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
+10. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros-node)
+11. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
+12. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node)
+13. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)
+## RGB + Infrared input
+1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node)
+## RGBD input
+1. [RGBD Hand Gesture Recognition](src/opendr_perception/README.md#rgbd-hand-gesture-recognition-ros-node)
+## RGB + Audio input
+1. [Audiovisual Emotion Recognition](src/opendr_perception/README.md#audiovisual-emotion-recognition-ros-node)
+## Audio input
+1. [Speech Command Recognition](src/opendr_perception/README.md#speech-command-recognition-ros-node)
+## Point cloud input
+1. [3D Object Detection Voxel](src/opendr_perception/README.md#3d-object-detection-voxel-ros-node)
+2. [3D Object Tracking AB3DMOT](src/opendr_perception/README.md#3d-object-tracking-ab3dmot-ros-node)
+## Biosignal input
+1. [Heart Anomaly Detection](src/opendr_perception/README.md#heart-anomaly-detection-ros-node)
diff --git a/projects/opendr_ws/images/opendr_node_diagram.png b/projects/opendr_ws/images/opendr_node_diagram.png
new file mode 100644
index 0000000000..6948a1f1b9
Binary files /dev/null and b/projects/opendr_ws/images/opendr_node_diagram.png differ
diff --git a/projects/opendr_ws/src/ros_bridge/CMakeLists.txt b/projects/opendr_ws/src/opendr_bridge/CMakeLists.txt
similarity index 75%
rename from projects/opendr_ws/src/ros_bridge/CMakeLists.txt
rename to projects/opendr_ws/src/opendr_bridge/CMakeLists.txt
index b7ed470ae0..6cad646562 100644
--- a/projects/opendr_ws/src/ros_bridge/CMakeLists.txt
+++ b/projects/opendr_ws/src/opendr_bridge/CMakeLists.txt
@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.0.2)
-project(ros_bridge)
+project(opendr_bridge)
find_package(catkin REQUIRED COMPONENTS
roscpp
@@ -14,6 +14,18 @@ catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
+add_message_files(
+ DIRECTORY msg
+ FILES
+ OpenDRPose2DKeypoint.msg
+ OpenDRPose2D.msg
+)
+
+ add_service_files(
+ DIRECTORY srv
+ FILES
+ OpenDRSingleObjectTracking.srv
+ )
generate_messages(
DEPENDENCIES
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/example/Images/.keep b/projects/opendr_ws/src/opendr_bridge/include/opendr_bridge/.keep
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/example/Images/.keep
rename to projects/opendr_ws/src/opendr_bridge/include/opendr_bridge/.keep
diff --git a/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2D.msg b/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2D.msg
new file mode 100644
index 0000000000..09b1443027
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2D.msg
@@ -0,0 +1,26 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message represents a full OpenDR human pose 2D as a list of keypoints
+
+Header header
+
+# The id of the pose
+int32 pose_id
+
+# The pose detection confidence of the model
+float32 conf
+
+# A list of the 2D keypoints of the human pose
+OpenDRPose2DKeypoint[] keypoint_list
diff --git a/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2DKeypoint.msg b/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2DKeypoint.msg
new file mode 100644
index 0000000000..72d14a19f2
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_bridge/msg/OpenDRPose2DKeypoint.msg
@@ -0,0 +1,22 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message contains all relevant information for an OpenDR human pose 2D keypoint
+
+# The kpt_name according to https://github.com/opendr-eu/opendr/blob/master/docs/reference/lightweight-open-pose.md#notes
+string kpt_name
+
+# x and y pixel position on the input image, (0, 0) is top-left corner of image
+int32 x
+int32 y
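
The two message definitions above are what the updated pose estimation pipeline publishes. As a minimal sketch of how they can be consumed (assuming the workspace has been built and sourced, and that the pose estimation node is publishing on its default `/opendr/poses` topic):

```python
#!/usr/bin/env python
# Minimal subscriber sketch for OpenDRPose2D messages (illustrative only).
import rospy
from opendr_bridge.msg import OpenDRPose2D


def pose_callback(msg):
    # Each message carries one detected pose: its id, confidence and keypoint list.
    rospy.loginfo("Pose %d (conf %.2f) with %d keypoints",
                  msg.pose_id, msg.conf, len(msg.keypoint_list))
    for kpt in msg.keypoint_list:
        rospy.loginfo("  %s: (%d, %d)", kpt.kpt_name, kpt.x, kpt.y)


if __name__ == '__main__':
    rospy.init_node('opendr_pose_listener', anonymous=True)
    rospy.Subscriber('/opendr/poses', OpenDRPose2D, pose_callback)
    rospy.spin()
```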
diff --git a/projects/opendr_ws/src/ros_bridge/package.xml b/projects/opendr_ws/src/opendr_bridge/package.xml
similarity index 88%
rename from projects/opendr_ws/src/ros_bridge/package.xml
rename to projects/opendr_ws/src/opendr_bridge/package.xml
index e9cb01afb1..9d68c624b2 100644
--- a/projects/opendr_ws/src/ros_bridge/package.xml
+++ b/projects/opendr_ws/src/opendr_bridge/package.xml
@@ -1,8 +1,8 @@
- ros_bridge
- 1.1.1
- OpenDR ros_bridge package. This package provides a way to translate ROS messages into OpenDR data types
+ opendr_bridge
+ 2.0.0
+ OpenDR ROS bridge package. This package provides a way to translate ROS messages into OpenDR data types
and vice versa.
OpenDR Project Coordinator
diff --git a/projects/opendr_ws/src/ros_bridge/setup.py b/projects/opendr_ws/src/opendr_bridge/setup.py
similarity index 100%
rename from projects/opendr_ws/src/ros_bridge/setup.py
rename to projects/opendr_ws/src/opendr_bridge/setup.py
diff --git a/projects/opendr_ws/src/ros_bridge/src/opendr_bridge/__init__.py b/projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/__init__.py
similarity index 100%
rename from projects/opendr_ws/src/ros_bridge/src/opendr_bridge/__init__.py
rename to projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/__init__.py
diff --git a/projects/opendr_ws/src/ros_bridge/src/opendr_bridge/bridge.py b/projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/bridge.py
similarity index 86%
rename from projects/opendr_ws/src/ros_bridge/src/opendr_bridge/bridge.py
rename to projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/bridge.py
index fe7e4171f2..215803f064 100755
--- a/projects/opendr_ws/src/ros_bridge/src/opendr_bridge/bridge.py
+++ b/projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/bridge.py
@@ -28,6 +28,7 @@
from sensor_msgs.msg import Image as ImageMsg, PointCloud as PointCloudMsg, ChannelFloat32 as ChannelFloat32Msg
import rospy
from geometry_msgs.msg import Point32 as Point32Msg, Quaternion as QuaternionMsg
+from opendr_bridge.msg import OpenDRPose2D, OpenDRPose2DKeypoint
class ROSBridge:
@@ -69,51 +70,50 @@ def to_ros_image(self, image: Image, encoding: str='passthrough') -> ImageMsg:
message = self._cv_bridge.cv2_to_imgmsg(image.opencv(), encoding=encoding)
return message
- def to_ros_pose(self, pose):
+ def to_ros_pose(self, pose: Pose):
"""
- Converts an OpenDR pose into a Detection2DArray msg that can carry the same information
- Each keypoint is represented as a bbox centered at the keypoint with zero width/height. The subject id is also
- embedded on each keypoint (stored in ObjectHypothesisWithPose).
- :param pose: OpenDR pose to be converted
+ Converts an OpenDR Pose into a OpenDRPose2D msg that can carry the same information, i.e. a list of keypoints,
+ the pose detection confidence and the pose id.
+ Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel position on input image with (0, 0)
+ being the top-left corner.
+ :param pose: OpenDR Pose to be converted to OpenDRPose2D
:type pose: engine.target.Pose
:return: ROS message with the pose
- :rtype: vision_msgs.msg.Detection2DArray
+ :rtype: opendr_bridge.msg.OpenDRPose2D
"""
data = pose.data
- keypoints = Detection2DArray()
- for i in range(data.shape[0]):
- keypoint = Detection2D()
- keypoint.bbox = BoundingBox2D()
- keypoint.results.append(ObjectHypothesisWithPose())
- keypoint.bbox.center = Pose2D()
- keypoint.bbox.center.x = data[i][0]
- keypoint.bbox.center.y = data[i][1]
- keypoint.bbox.size_x = 0
- keypoint.bbox.size_y = 0
- keypoint.results[0].id = pose.id
- if pose.confidence:
- keypoint.results[0].score = pose.confidence
- keypoints.detections.append(keypoint)
- return keypoints
+ # Setup ros pose
+ ros_pose = OpenDRPose2D()
+ ros_pose.pose_id = int(pose.id)
+ if pose.confidence:
+ ros_pose.conf = pose.confidence
- def from_ros_pose(self, ros_pose):
- """
- Converts a ROS message with pose payload into an OpenDR pose
- :param ros_pose: the pose to be converted (represented as vision_msgs.msg.Detection2DArray)
- :type ros_pose: vision_msgs.msg.Detection2DArray
- :return: an OpenDR pose
+ # Add keypoints to pose
+ for i in range(data.shape[0]):
+ ros_keypoint = OpenDRPose2DKeypoint()
+ ros_keypoint.kpt_name = pose.kpt_names[i]
+ ros_keypoint.x = data[i][0]
+ ros_keypoint.y = data[i][1]
+ # Add keypoint to pose
+ ros_pose.keypoint_list.append(ros_keypoint)
+ return ros_pose
+
+ def from_ros_pose(self, ros_pose: OpenDRPose2D):
+ """
+ Converts an OpenDRPose2D message into an OpenDR Pose.
+ :param ros_pose: the ROS pose to be converted
+ :type ros_pose: opendr_bridge.msg.OpenDRPose2D
+ :return: an OpenDR Pose
:rtype: engine.target.Pose
"""
- keypoints = ros_pose.detections
- data = []
- pose_id, confidence = None, None
+ ros_keypoints = ros_pose.keypoint_list
+ keypoints = []
+ pose_id, confidence = ros_pose.pose_id, ros_pose.conf
- for keypoint in keypoints:
- data.append(keypoint.bbox.center.x)
- data.append(keypoint.bbox.center.y)
- confidence = keypoint.results[0].score
- pose_id = keypoint.results[0].id
- data = np.asarray(data).reshape((-1, 2))
+ for ros_keypoint in ros_keypoints:
+ keypoints.append(int(ros_keypoint.x))
+ keypoints.append(int(ros_keypoint.y))
+ data = np.asarray(keypoints).reshape((-1, 2))
pose = Pose(data, confidence)
pose.id = pose_id
@@ -213,7 +213,7 @@ def to_ros_boxes(self, box_list):
ros_box.bbox.center.y = box.top + box.height / 2.
ros_box.bbox.size_x = box.width
ros_box.bbox.size_y = box.height
- ros_box.results[0].id = box.name
+ ros_box.results[0].id = int(box.name)
if box.confidence:
ros_box.results[0].score = box.confidence
ros_boxes.detections.append(ros_box)
@@ -235,8 +235,8 @@ def from_ros_boxes(self, ros_detections):
height = box.bbox.size_y
left = box.bbox.center.x - width / 2.
top = box.bbox.center.y - height / 2.
- id = box.results[0].id
- bbox = BoundingBox(top=top, left=left, width=width, height=height, name=id)
+ _id = int(box.results[0].id)
+ bbox = BoundingBox(top=top, left=left, width=width, height=height, name=_id)
bboxes.data.append(bbox)
return bboxes
@@ -275,6 +275,50 @@ def from_ros_tracking_annotation(self, ros_detections, ros_tracking_ids, frame=-
return TrackingAnnotationList(boxes)
+ def from_ros_single_tracking_annotation(self, ros_detection_box):
+ """
+        Converts a ROS Detection2D message into an OpenDR TrackingAnnotation.
+        :param ros_detection_box: The box to be converted.
+        :type ros_detection_box: vision_msgs.msg.Detection2D
+        :return: An OpenDR TrackingAnnotation
+        :rtype: engine.target.TrackingAnnotation
+ """
+ width = ros_detection_box.bbox.size_x
+ height = ros_detection_box.bbox.size_y
+ left = ros_detection_box.bbox.center.x - width / 2.
+ top = ros_detection_box.bbox.center.y - height / 2.
+ id = 0
+ bbox = TrackingAnnotation(
+ name=id,
+ left=left,
+ top=top,
+ width=width,
+ height=height,
+ id=0,
+ frame=-1
+ )
+ return bbox
+
+ def to_ros_single_tracking_annotation(self, tracking_annotation):
+ """
+        Converts an OpenDR TrackingAnnotation into a ROS Detection2D message.
+ :param tracking_annotation: The box to be converted.
+ :type tracking_annotation: engine.target.TrackingAnnotation
+ :return: A ROS vision_msgs.msg.Detection2D
+ :rtype: vision_msgs.msg.Detection2D
+ """
+ ros_box = Detection2D()
+ ros_box.bbox = BoundingBox2D()
+ ros_box.results.append(ObjectHypothesisWithPose())
+ ros_box.bbox.center = Pose2D()
+ ros_box.bbox.center.x = tracking_annotation.left + tracking_annotation.width / 2.0
+ ros_box.bbox.center.y = tracking_annotation.top + tracking_annotation.height / 2.0
+ ros_box.bbox.size_x = tracking_annotation.width
+ ros_box.bbox.size_y = tracking_annotation.height
+ ros_box.results[0].id = int(tracking_annotation.name)
+ ros_box.results[0].score = -1
+ return ros_box
+
def to_ros_bounding_box_list(self, bounding_box_list):
"""
Converts an OpenDR bounding_box_list into a Detection2DArray msg that can carry the same information
@@ -294,7 +338,7 @@ def to_ros_bounding_box_list(self, bounding_box_list):
detection.bbox.center.y = bounding_box.top + bounding_box.height / 2.0
detection.bbox.size_x = bounding_box.width
detection.bbox.size_y = bounding_box.height
- detection.results[0].id = bounding_box.name
+ detection.results[0].id = int(bounding_box.name)
detection.results[0].score = bounding_box.confidence
detections.detections.append(detection)
return detections
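
Taken together, the updated bridge methods above can be exercised roughly as follows; this is only a sketch, assuming the workspace has been built so that the new `opendr_bridge` messages are generated, and the keypoint array is a dummy placeholder:

```python
# Illustrative round trip of a pose through the updated ROSBridge (not part of the toolkit).
import numpy as np
from opendr.engine.target import Pose
from opendr_bridge import ROSBridge

# Dummy 18-keypoint pose with (x, y) pixel coordinates, purely for demonstration.
dummy_keypoints = np.zeros((18, 2), dtype=int)
pose = Pose(dummy_keypoints, confidence=0.9)
pose.id = 0

bridge = ROSBridge()
ros_msg = bridge.to_ros_pose(pose)         # engine.target.Pose -> OpenDRPose2D
recovered = bridge.from_ros_pose(ros_msg)  # OpenDRPose2D -> engine.target.Pose
```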
diff --git a/projects/opendr_ws/src/opendr_bridge/srv/OpenDRSingleObjectTracking.srv b/projects/opendr_ws/src/opendr_bridge/srv/OpenDRSingleObjectTracking.srv
new file mode 100644
index 0000000000..7ca4024125
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_bridge/srv/OpenDRSingleObjectTracking.srv
@@ -0,0 +1,3 @@
+vision_msgs/Detection2D init_box
+---
+bool success
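
The service above lets an external node (re)initialize the single object tracker with a bounding box and get back a success flag. A rough client-side sketch is shown below; the service name used here is only an assumption for illustration, so check the SiamRPN tracking node for the name it actually advertises:

```python
# Hypothetical client for the OpenDRSingleObjectTracking service (illustrative only).
import rospy
from vision_msgs.msg import Detection2D, BoundingBox2D
from opendr_bridge.srv import OpenDRSingleObjectTracking

rospy.init_node('siamrpn_init_client')
service_name = '/opendr/siamrpn_tracker/init'  # assumed name, adjust to the actual node
rospy.wait_for_service(service_name)
init_tracker = rospy.ServiceProxy(service_name, OpenDRSingleObjectTracking)

init_box = Detection2D()
init_box.bbox = BoundingBox2D()
init_box.bbox.center.x, init_box.bbox.center.y = 320.0, 240.0  # box centre in pixels
init_box.bbox.size_x, init_box.bbox.size_y = 80.0, 120.0       # box width/height in pixels

response = init_tracker(init_box)
print('Tracker initialized:', response.success)
```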
diff --git a/projects/opendr_ws/src/data_generation/CMakeLists.txt b/projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt
similarity index 85%
rename from projects/opendr_ws/src/data_generation/CMakeLists.txt
rename to projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt
index 2a43cfdb27..ed273ea805 100644
--- a/projects/opendr_ws/src/data_generation/CMakeLists.txt
+++ b/projects/opendr_ws/src/opendr_data_generation/CMakeLists.txt
@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.0.2)
-project(data_generation)
+project(opendr_data_generation)
find_package(catkin REQUIRED COMPONENTS
roscpp
@@ -27,6 +27,6 @@ include_directories(
#############
catkin_install_python(PROGRAMS
- scripts/synthetic_facial_generation.py
+ scripts/synthetic_facial_generation_node.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
diff --git a/projects/opendr_ws/src/data_generation/README.md b/projects/opendr_ws/src/opendr_data_generation/README.md
similarity index 97%
rename from projects/opendr_ws/src/data_generation/README.md
rename to projects/opendr_ws/src/opendr_data_generation/README.md
index 523347f6a0..67390f9918 100644
--- a/projects/opendr_ws/src/data_generation/README.md
+++ b/projects/opendr_ws/src/opendr_data_generation/README.md
@@ -1,28 +1,28 @@
-# Perception Package
-
-This package contains ROS nodes related to data generation package of OpenDR.
-
-## Pose Estimation ROS Node
-Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
-
-
-1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g.,
-```shell
-export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH"
-```
-
-2. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package):
-
-```shell
-rosrun usb_cam usb_cam_node
-```
-
-3. You are then ready to start the synthetic data generation node
-
-```shell
-rosrun data_generation synthetic_facial_generation.py
-```
-
-3. You can examine the published multiview facial images stream using `rosrun rqt_image_view rqt_image_view` (select the topic `/opendr/synthetic_facial_images`) or `rostopic echo /opendr/synthetic_facial_images`
-
-
+# Data Generation Package
+
+This package contains ROS nodes related to the data generation package of OpenDR.
+
+## Synthetic Multi-view Facial Image Generation ROS Node
+Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
+
+
+1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH`), e.g.,
+```shell
+export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH"
+```
+
+2. Start the node responsible for publishing images. If you have a USB camera, then you can use the corresponding node (assuming you have installed the corresponding package):
+
+```shell
+rosrun usb_cam usb_cam_node
+```
+
+3. You are then ready to start the synthetic data generation node:
+
+```shell
+rosrun opendr_data_generation synthetic_facial_generation_node.py
+```
+
+4. You can examine the published multi-view facial image stream using `rosrun rqt_image_view rqt_image_view` (select the topic `/opendr/synthetic_facial_images`) or `rostopic echo /opendr/synthetic_facial_images`
+
+
diff --git a/projects/opendr_ws/src/data_generation/package.xml b/projects/opendr_ws/src/opendr_data_generation/package.xml
similarity index 93%
rename from projects/opendr_ws/src/data_generation/package.xml
rename to projects/opendr_ws/src/opendr_data_generation/package.xml
index 57d1e6e1f7..f4733b2ada 100644
--- a/projects/opendr_ws/src/data_generation/package.xml
+++ b/projects/opendr_ws/src/opendr_data_generation/package.xml
@@ -1,7 +1,7 @@
- data_generation
- 1.1.1
+ opendr_data_generation
+ 2.0.0
OpenDR's ROS nodes for data generation package
OpenDR Project Coordinator
Apache License v2.0
diff --git a/projects/opendr_ws/src/data_generation/scripts/synthetic_facial_generation.py b/projects/opendr_ws/src/opendr_data_generation/scripts/synthetic_facial_generation_node.py
similarity index 100%
rename from projects/opendr_ws/src/data_generation/scripts/synthetic_facial_generation.py
rename to projects/opendr_ws/src/opendr_data_generation/scripts/synthetic_facial_generation_node.py
diff --git a/projects/opendr_ws/src/perception/CMakeLists.txt b/projects/opendr_ws/src/opendr_perception/CMakeLists.txt
similarity index 51%
rename from projects/opendr_ws/src/perception/CMakeLists.txt
rename to projects/opendr_ws/src/opendr_perception/CMakeLists.txt
index a47f5f9c4b..c2a4a6278b 100644
--- a/projects/opendr_ws/src/perception/CMakeLists.txt
+++ b/projects/opendr_ws/src/opendr_perception/CMakeLists.txt
@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.0.2)
-project(perception)
+project(opendr_perception)
find_package(catkin REQUIRED COMPONENTS
roscpp
@@ -28,10 +28,15 @@ include_directories(
#############
catkin_install_python(PROGRAMS
- scripts/pose_estimation.py
- scripts/fall_detection.py
- scripts/object_detection_2d_detr.py
- scripts/object_detection_2d_gem.py
- scripts/semantic_segmentation_bisenet.py
+ scripts/pose_estimation_node.py
+ scripts/hr_pose_estimation_node.py
+ scripts/fall_detection_node.py
+ scripts/object_detection_2d_nanodet_node.py
+ scripts/object_detection_2d_yolov5_node.py
+ scripts/object_detection_2d_detr_node.py
+ scripts/object_detection_2d_gem_node.py
+ scripts/semantic_segmentation_bisenet_node.py
+ scripts/object_tracking_2d_siamrpn_node.py
+ scripts/facial_emotion_estimation_node.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
diff --git a/projects/opendr_ws/src/opendr_perception/README.md b/projects/opendr_ws/src/opendr_perception/README.md
new file mode 100644
index 0000000000..c868a9648d
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/README.md
@@ -0,0 +1,849 @@
+# OpenDR Perception Package
+
+This package contains ROS nodes related to the perception package of OpenDR.
+
+---
+
+## Prerequisites
+
+Before you can run any of the package's ROS nodes, some prerequisites need to be fulfilled:
+1. First of all, you need to [set up the required packages, build and source your workspace.](../../README.md#first-time-setup)
+2. Start roscore by running `roscore &`, if you haven't already done so.
+3. _(Optional for nodes with [RGB input](#rgb-input-nodes))_
+
+ For basic usage and testing, all the toolkit's ROS nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam`, which is installed with the toolkit.
+ You can run the webcam node in the terminal with the workspace sourced using:
+ ```shell
+ rosrun usb_cam usb_cam_node &
+ ```
+ By default, the USB cam node publishes images on `/usb_cam/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument.
+ As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.**
+
+---
+
+## Notes
+
+- ### Display output images with rqt_image_view
+ For any node that outputs images, `rqt_image_view` can be used to display them by running the following command:
+ ```shell
+ rosrun rqt_image_view rqt_image_view &
+ ```
+ A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window.
+ Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view.
+
+- ### Echo node output
+ All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command:
+ ```shell
+ rostopic echo /opendr/topic_name
+ ```
+ You can find out the default topic name for each node, in its documentation below.
+
+- ### Increase performance by disabling output
+ Optionally, nodes can be modified via command line arguments, which are presented for each node separately below.
+ Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc.
+ When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic.
+ This disables publishing on that topic, forgoing some operations in the node, which might increase its performance.
+
+ _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._
+
+- ### An example diagram of OpenDR nodes running
+ ![Pose Estimation ROS node running diagram](../../images/opendr_node_diagram.png)
+ - On the left, the `usb_cam` node can be seen, which is using a system camera to publish images on the `/usb_cam/image_raw` topic.
+  - In the middle, OpenDR's pose estimation node is running, taking the published images as input. By default, the node has its input topic set to `/usb_cam/image_raw`.
+  - To the right, the two output topics of the pose estimation node can be seen.
+ The bottom topic `/opendr/image_pose_annotated` is the annotated image which can be easily viewed with `rqt_image_view` as explained earlier.
+ The other topic `/opendr/poses` is the detection message which contains the detected poses' detailed information.
+ This message can be easily viewed by running `rostopic echo /opendr/poses` in a terminal with the OpenDR ROS workspace sourced.
+
+
+
+----
+## RGB input nodes
+
+### Pose Estimation ROS Node
+
+You can find the pose estimation ROS node python script [here](./scripts/pose_estimation_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md).
+The node publishes the detected poses in [OpenDR's 2D pose message format](../opendr_bridge/msg/OpenDRPose2D.msg), which saves a list of [OpenDR's keypoint message format](../opendr_bridge/msg/OpenDRPose2DKeypoint.msg).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the pose detection node:
+ ```shell
+ rosrun opendr_perception pose_estimation_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--accelerate`: acceleration flag that causes pose estimation to run faster but with less accuracy
+
+3. Default output topics:
+ - Output images: `/opendr/image_pose_annotated`
+ - Detection messages: `/opendr/poses`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Fall Detection ROS Node
+
+You can find the fall detection ROS node python script [here](./scripts/fall_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md).
+Fall detection uses the toolkit's pose estimation tool internally.
+
+
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the fall detection node:
+
+ ```shell
+ rosrun opendr_perception fall_detection_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+   - `--accelerate`: acceleration flag that causes the internally used pose estimation to run faster but with less accuracy
+
+3. Default output topics:
+ - Output images: `/opendr/image_fallen_annotated`
+ - Detection messages: `/opendr/fallen`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Face Detection ROS Node
+
+The face detection ROS node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well.
+
+You can find the face detection ROS node python script [here](./scripts/face_detection_retinaface_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the face detection node
+
+ ```shell
+ rosrun opendr_perception face_detection_retinaface_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--backbone BACKBONE`: retinaface backbone, options are either `mnet` or `resnet`, where `mnet` detects masked faces as well (default=`resnet`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_faces_annotated`
+ - Detection messages: `/opendr/faces`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Face Recognition ROS Node
+
+You can find the face recognition ROS node python script [here](./scripts/face_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the face recognition node:
+
+ ```shell
+ rosrun opendr_perception face_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`)
+ - `-id or --detections_id_topic DETECTIONS_ID_TOPIC`: topic name for detection ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--backbone BACKBONE`: backbone network (default=`mobilefacenet`)
+ - `--dataset_path DATASET_PATH`: path of the directory where the images of the faces to be recognized are stored (default=`./database`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_face_reco_annotated`
+ - Detection messages: `/opendr/face_recognition` and `/opendr/face_recognition_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+Reference images should be placed in a defined structure like:
+- imgs
+ - ID1
+ - image1
+ - image2
+ - ID2
+ - ID3
+ - ...
+
+The default dataset path is `./database`. Please use the `--dataset_path ./your/path/` argument to define a custom one.
+The name of the sub-folder, e.g. ID1, will be published under `/opendr/face_recognition_id`.
+
+The database entry and the returned confidence is published under the topic name `/opendr/face_recognition`, and the human-readable ID
+under `/opendr/face_recognition_id`.
+
+### 2D Object Detection ROS Nodes
+
+For 2D object detection, there are several ROS nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, YOLOv5, CenterNet, Nanodet and DETR.
+
+You can find the 2D object detection ROS node python scripts here:
+[SSD node](./scripts/object_detection_2d_ssd_node.py), [YOLOv3 node](./scripts/object_detection_2d_yolov3_node.py), [YOLOv5 node](./scripts/object_detection_2d_yolov5_node.py), [CenterNet node](./scripts/object_detection_2d_centernet_node.py), [Nanodet node](./scripts/object_detection_2d_nanodet_node.py) and [DETR node](./scripts/object_detection_2d_detr_node.py),
+where you can inspect the code and modify it as you wish to fit your needs.
+The nodes make use of the toolkit's various 2D object detection tools:
+[SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [YOLOv5 tool](../../../../src/opendr/perception/object_detection_2d/yolov5/yolov5_learner.py),
+[CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [Nanodet tool](../../../../src/opendr/perception/object_detection_2d/nanodet/nanodet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py),
+whose documentation can be found here:
+[SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [YOLOv5 docs](../../../../docs/reference/object-detection-2d-yolov5.md),
+[CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [Nanodet docs](../../../../docs/reference/nanodet.md), [DETR docs](../../../../docs/reference/detr.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start a 2D object detector node:
+ 1. SSD node
+ ```shell
+ rosrun opendr_perception object_detection_2d_ssd_node.py
+ ```
+ The following optional arguments are available for the SSD node:
+ - `--backbone BACKBONE`: Backbone network (default=`vgg16_atrous`)
+ - `--nms_type NMS_TYPE`: Non-Maximum Suppression type options are `default`, `seq2seq-nms`, `soft-nms`, `fast-nms`, `cluster-nms` (default=`default`)
+
+ 2. YOLOv3 node
+ ```shell
+ rosrun opendr_perception object_detection_2d_yolov3_node.py
+ ```
+ The following optional argument is available for the YOLOv3 node:
+ - `--backbone BACKBONE`: Backbone network (default=`darknet53`)
+
+ 3. YOLOv5 node
+ ```shell
+ rosrun opendr_perception object_detection_2d_yolov5_node.py
+ ```
+ The following optional argument is available for the YOLOv5 node:
+ - `--model_name MODEL_NAME`: Network architecture, options are `yolov5s`, `yolov5n`, `yolov5m`, `yolov5l`, `yolov5x`, `yolov5n6`, `yolov5s6`, `yolov5m6`, `yolov5l6`, `custom` (default=`yolov5s`)
+
+ 4. CenterNet node
+ ```shell
+ rosrun opendr_perception object_detection_2d_centernet_node.py
+ ```
+ The following optional argument is available for the CenterNet node:
+ - `--backbone BACKBONE`: Backbone network (default=`resnet50_v1b`)
+
+ 5. Nanodet node
+ ```shell
+ rosrun opendr_perception object_detection_2d_nanodet_node.py
+ ```
+ The following optional argument is available for the Nanodet node:
+   - `--model Model`: model whose config file will be used (default=`plus_m_1.5x_416`)
+
+ 6. DETR node
+ ```shell
+ rosrun opendr_perception object_detection_2d_detr_node.py
+ ```
+
+ The following optional arguments are available for all nodes above:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### 2D Single Object Tracking ROS Node
+
+You can find the single object tracking 2D ROS node python script [here](./scripts/object_tracking_2d_siamrpn_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [single object tracking 2D SiamRPN tool](../../../../src/opendr/perception/object_tracking_2d/siamrpn/siamrpn_learner.py) whose documentation can be found [here](../../../../docs/reference/object-tracking-2d-siamrpn.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the single object tracking 2D node:
+
+ ```shell
+ rosrun opendr_perception object_tracking_2d_siamrpn_node.py
+ ```
+
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_tracking_annotated`)
+ - `-t or --tracker_topic TRACKER_TOPIC`: topic name for tracker messages, `None` to stop the node from publishing on this topic (default=`/opendr/tracked_object`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_tracking_annotated`
+ - Detection messages: `/opendr/tracked_object`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+To initialize this node it is required to provide a bounding box of an object to track.
+This is achieved by initializing one of the toolkit's 2D object detectors (YOLOv3) and running object detection once on the input.
+Afterwards, **the detected bounding box that is closest to the center of the image** is used to initialize the tracker.
+Feel free to modify the node to initialize it in a different way that matches your use case.
+
+### 2D Object Tracking ROS Nodes
+
+For 2D object tracking, there are two ROS nodes provided, one using Deep Sort and one using FairMOT, which use either pretrained or custom trained models.
+The predicted tracking annotations are split into two topics with detections and tracking IDs. Additionally, an annotated image is generated.
+
+You can find the 2D object tracking ROS node python scripts here: [Deep Sort node](./scripts/object_tracking_2d_deep_sort_node.py) and [FairMOT node](./scripts/object_tracking_2d_fair_mot_node.py)
+where you can inspect the code and modify it as you wish to fit your needs.
+The nodes make use of the toolkit's [object tracking 2D - Deep Sort tool](../../../../src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py)
+and [object tracking 2D - FairMOT tool](../../../../src/opendr/perception/object_tracking_2d/fair_mot/object_tracking_2d_fair_mot_learner.py)
+whose documentation can be found here: [Deep Sort docs](../../../../docs/reference/object-tracking-2d-deep-sort.md), [FairMOT docs](../../../../docs/reference/object-tracking-2d-fair-mot.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start a 2D object tracking node:
+ 1. Deep Sort node
+ ```shell
+ rosrun opendr_perception object_tracking_2d_deep_sort_node.py
+ ```
+ The following optional argument is available for the Deep Sort node:
+ - `-n --model_name MODEL_NAME`: name of the trained model (default=`deep_sort`)
+ 2. FairMOT node
+ ```shell
+ rosrun opendr_perception object_tracking_2d_fair_mot_node.py
+ ```
+ The following optional argument is available for the FairMOT node:
+ - `-n --model_name MODEL_NAME`: name of the trained model (default=`fairmot_dla34`)
+
+ The following optional arguments are available for both nodes:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `-t or --tracking_id_topic TRACKING_ID_TOPIC`: topic name for tracking ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `-td --temp_dir TEMP_DIR`: path to a temporary directory with models (default=`temp`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+ - Tracking ID messages: `/opendr/objects_tracking_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+An [image dataset node](#image-dataset-ros-node) is also provided to be used along these nodes.
+Make sure to change the default input topic of the tracking node if you are not using the USB cam node.
+
+### Panoptic Segmentation ROS Node
+
+You can find the panoptic segmentation ROS node python script [here](./scripts/panoptic_segmentation_efficient_ps_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md)
+and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the panoptic segmentation node:
+
+ ```shell
+ rosrun opendr_perception panoptic_segmentation_efficient_ps_node.py
+ ```
+
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC` : listen to RGB images on this topic (default=`/usb_cam/image_raw`)
+ - `-oh --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`)
+ - `-ov --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the panoptic segmentation map as an RGB image on this topic or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`opendr/panoptic/rgb_visualization`)
+ - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated)
+ - `--checkpoint CHECKPOINT` : download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`)
+
+3. Default output topics:
+ - Output images: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization`
+ - Detection messages: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Semantic Segmentation ROS Node
+
+You can find the semantic segmentation ROS node python script [here](./scripts/semantic_segmentation_bisenet_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the semantic segmentation node:
+
+ ```shell
+ rosrun opendr_perception semantic_segmentation_bisenet_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: topic to which we are publishing the heatmap in the form of a ROS image containing class IDs, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap`)
+ - `-ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/heatmap`, `/opendr/heatmap_visualization`
+ - Detection messages: `/opendr/heatmap`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+On the table below you can find the detectable classes and their corresponding IDs:
+
+| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown |
+|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------|
+| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+
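+Putting the arguments above together, a minimal sketch (assuming a USB camera on the default topic) that runs the node on CPU and views the blended heatmap from a second terminal:
+```shell
+rosrun opendr_perception semantic_segmentation_bisenet_node.py --device cpu
+rosrun rqt_image_view rqt_image_view /opendr/heatmap_visualization
+```
+The IDs in the table above are the values published as pixel values in the `/opendr/heatmap` image.
+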
+### Image-based Facial Emotion Estimation ROS Node
+
+You can find the image-based facial emotion estimation ROS node python script [here](./scripts/facial_emotion_estimation_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's image-based facial emotion estimation tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/image_based_facial_emotion_estimation/facial_emotion_learner.py)
+whose documentation can be found [here](../../../../docs/reference/image_based_facial_emotion_estimation.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the image-based facial emotion estimation node:
+
+ ```shell
+ rosrun opendr_perception facial_emotion_estimation_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_emotion_estimation_annotated`)
+ - `-e or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the facial emotion results, `None` to stop the node from publishing on this topic (default=`"/opendr/facial_emotion_estimation"`)
+ - `-m or --output_emotions_description_topic OUTPUT_EMOTIONS_DESCRIPTION_TOPIC`: topic to which we are publishing the description of the estimated facial emotion, `None` to stop the node from publishing on this topic (default=`/opendr/facial_emotion_estimation_description`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_emotion_estimation_annotated`
+ - Detection messages: `/opendr/facial_emotion_estimation`, `/opendr/facial_emotion_estimation_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+This node requires a face to be detected first. This is achieved by incorporating the toolkit's face detector and running face detection on the input.
+Afterwards, the detected bounding box of the face is cropped and fed into the facial emotion estimator.
+Feel free to modify the node to detect faces in a different way that matches your use case.
+
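+To quickly inspect the published predictions without a visualization tool, you can echo the description topic (default topic names assumed):
+```shell
+rostopic echo /opendr/facial_emotion_estimation_description
+```
+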
+### Landmark-based Facial Expression Recognition ROS Node
+
+A ROS node for performing landmark-based facial expression recognition using a model trained on the AFEW, CK+ or Oulu-CASIA datasets.
+OpenDR does not include a pretrained model, so one should be provided by the user.
+An alternative would be to use the [image-based facial emotion estimation node](#image-based-facial-emotion-estimation-ros-node) provided by the toolkit.
+
+You can find the landmark-based facial expression recognition ROS node python script [here](./scripts/landmark_based_facial_expression_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's landmark-based facial expression recognition tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/progressive_spatio_temporal_bln_learner.py)
+whose documentation can be found [here](../../../../docs/reference/landmark-based-facial-expression-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the landmark-based facial expression recognition node:
+
+ ```shell
+ rosrun opendr_perception landmark_based_facial_expression_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized facial expression category info, `None` to stop the node from publishing on this topic (default=`"/opendr/landmark_expression_recognition"`)
+ - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the description of the recognized facial expression, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition_description`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--model`: architecture to use for facial expression recognition, options are `pstbln_ck+`, `pstbln_casia`, `pstbln_afew` (default=`pstbln_afew`)
+ - `-s or --shape_predictor SHAPE_PREDICTOR`: shape predictor (landmark_extractor) to use (default=`./predictor_path`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/landmark_expression_recognition`, `/opendr/landmark_expression_recognition_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
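+As an illustrative sketch only, since no pretrained model is bundled, the node is typically started with an explicit architecture and a local shape predictor; the predictor path below is a placeholder:
+```shell
+rosrun opendr_perception landmark_based_facial_expression_recognition_node.py \
+    --model pstbln_afew -s ./predictor_path
+```
+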
+### Skeleton-based Human Action Recognition ROS Node
+
+A ROS node for performing skeleton-based human action recognition using either ST-GCN or PST-GCN models pretrained on the NTU-RGBD-60 dataset.
+Human body poses are first extracted from the image using the lightweight OpenPose method implemented in the toolkit, and are then passed to the skeleton-based action recognition method for classification.
+
+You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py)
+and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py)
+whose documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the skeleton-based human action recognition node:
+
+ ```shell
+ rosrun opendr_perception skeleton_based_action_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output pose-annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
+ - `-p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC`: topic name for pose annotations, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
+ - `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`)
+ - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`)
+ - `--model`: model to use, options are `stgcn` or `pstgcn`, (default=`stgcn`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/skeleton_recognized_action`, `/opendr/skeleton_recognized_action_description`, `/opendr/poses`
+ - Output images: `/opendr/image_pose_annotated`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
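+For example, a sketch that switches to the PST-GCN model and monitors the recognized action labels from a second terminal (default topics assumed):
+```shell
+rosrun opendr_perception skeleton_based_action_recognition_node.py --model pstgcn
+rostopic echo /opendr/skeleton_recognized_action_description
+```
+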
+### Video Human Activity Recognition ROS Node
+
+A ROS node for performing human activity recognition using either CoX3D or X3D models pretrained on Kinetics400.
+
+You can find the video human activity recognition ROS node python script [here](./scripts/video_activity_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and
+[here for X3D](../../../../src/opendr/perception/activity_recognition/x3d/x3d_learner.py) whose documentation can be found [here](../../../../docs/reference/activity-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the video human activity recognition node:
+
+ ```shell
+ rosrun opendr_perception video_activity_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
+ - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized activity, `None` to stop the node from publishing on this topic (default=`"/opendr/human_activity_recognition"`)
+ - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the ID of the recognized action, `None` to stop the node from publishing on this topic (default=`/opendr/human_activity_recognition_description`)
+ - `--model`: architecture to use for human activity recognition, options are `cox3d-s`, `cox3d-m`, `cox3d-l`, `x3d-xs`, `x3d-s`, `x3d-m`, or `x3d-l` (default=`cox3d-m`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/human_activity_recognition`, `/opendr/human_activity_recognition_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+You can find the IDs corresponding to the recognized activities [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv).
+
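+For instance, a sketch that runs the X3D-M model instead of the default CoX3D-M and prints the recognized activities (default topics assumed):
+```shell
+rosrun opendr_perception video_activity_recognition_node.py --model x3d-m
+rostopic echo /opendr/human_activity_recognition_description
+```
+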
+## RGB + Infrared input
+
+### 2D Object Detection GEM ROS Node
+
+You can find the object detection 2D GEM ROS node python script [here](./scripts/object_detection_2d_gem_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [object detection 2D GEM tool](../../../../src/opendr/perception/object_detection_2d/gem/gem_learner.py)
+whose documentation can be found [here](../../../../docs/reference/gem.md).
+
+#### Instructions for basic usage:
+
+1. First, one needs to find corresponding points in the color and infrared images, in order to compute the homography matrix that corrects for the difference in perspective between the infrared and the RGB camera.
+ These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit.
+
+2. Pass the points you have found as the *pts_color* and *pts_infra* arguments to the [ROS GEM node](./scripts/object_detection_2d_gem_node.py).
+
+3. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)):
+
+ ```shell
+ roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480
+ ```
+
+4. You are then ready to start the object detection 2D GEM node:
+
+ ```shell
+ rosrun opendr_perception object_detection_2d_gem_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/camera/color/image_raw`)
+ - `-ii or --input_infra_image_topic INPUT_INFRA_IMAGE_TOPIC`: topic name for input infrared image (default=`/camera/infra/image_raw`)
+ - `-oc or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/rgb_image_objects_annotated`)
+ - `-oi or --output_infra_image_topic OUTPUT_INFRA_IMAGE_TOPIC`: topic name for output annotated infrared image, `None` to stop the node from publishing on this topic (default=`/opendr/infra_image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+5. Default output topics:
+ - Output RGB images: `/opendr/rgb_image_objects_annotated`
+ - Output infrared images: `/opendr/infra_image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
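+Putting steps 3 and 4 together, a sketch that starts the node against the default RealSense topics and publishes detections (all topic names as documented above):
+```shell
+rosrun opendr_perception object_detection_2d_gem_node.py \
+    -ic /camera/color/image_raw -ii /camera/infra/image_raw -d /opendr/objects
+```
+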
+----
+## RGBD input
+
+### RGBD Hand Gesture Recognition ROS Node
+A ROS node for performing hand gesture recognition using a MobileNetv2 model trained on the HANDS dataset.
+The node has been tested with a Kinect v2 for depth data acquisition, using the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.
+
+You can find the RGBD hand gesture recognition ROS node python script [here](./scripts/rgbd_hand_gesture_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [hand gesture recognition tool](../../../../src/opendr/perception/multimodal_human_centric/rgbd_hand_gesture_learner/rgbd_hand_gesture_learner.py)
+whose documentation can be found [here](../../../../docs/reference/rgbd-hand-gesture-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2 if needed.
+
+2. You are then ready to start the hand gesture recognition node:
+ ```shell
+ rosrun opendr_perception rgbd_hand_gesture_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`)
+ - `-id or --input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`)
+ - `-o or --output_gestures_topic OUTPUT_GESTURES_TOPIC`: topic name for predicted gesture class (default=`/opendr/gestures`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Detection messages:`/opendr/gestures`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
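+A usage sketch, assuming the Kinect v2 default topics above, that starts the node and prints the predicted gestures from a second terminal:
+```shell
+rosrun opendr_perception rgbd_hand_gesture_recognition_node.py
+rostopic echo /opendr/gestures
+```
+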
+----
+## RGB + Audio input
+
+### Audiovisual Emotion Recognition ROS Node
+
+You can find the audiovisual emotion recognition ROS node python script [here](./scripts/audiovisual_emotion_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py),
+whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+2. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2 if needed.
+3. You are then ready to start the audiovisual emotion recognition node
+
+ ```shell
+ rosrun opendr_perception audiovisual_emotion_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-iv or --input_video_topic INPUT_VIDEO_TOPIC`: topic name for input video, expects a detected face of size 224x224 (default=`/usb_cam/image_raw`)
+ - `-ia or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`)
+ - `-o or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the predicted emotion (default=`/opendr/audiovisual_emotion`)
+ - `--buffer_size BUFFER_SIZE`: length of audio and video in seconds (default=`3.6`)
+ - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
+
+4. Default output topics:
+ - Detection messages: `/opendr/audiovisual_emotion`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
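+A usage sketch with the default topics and an explicit buffer length, printing the predicted emotion in a second terminal:
+```shell
+rosrun opendr_perception audiovisual_emotion_recognition_node.py --buffer_size 3.6
+rostopic echo /opendr/audiovisual_emotion
+```
+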
+----
+## Audio input
+
+### Speech Command Recognition ROS Node
+
+A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset.
+
+You can find the speech command recognition ROS node python script [here](./scripts/speech_command_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's speech command recognition tools:
+[EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py)
+whose documentation can be found here:
+[EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing audio. Remember to modify the input topics using the arguments in step 2, if needed.
+
+2. You are then ready to start the speech command recognition node
+
+ ```shell
+ rosrun opendr_perception speech_command_recognition_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio/audio`)
+ - `-o or --output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`)
+ - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds (default=`1.5`)
+ - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`)
+ - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
+
+3. Default output topics:
+ - Detection messages, class id and confidence: `/opendr/speech_recognition`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+EdgeSpeechNets currently does not have a pretrained model available for download; only local files may be used.
+
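+For example, following the note above, a sketch that loads a locally stored EdgeSpeechNets model (the model path is a placeholder) and prints the recognized commands:
+```shell
+rosrun opendr_perception speech_command_recognition_node.py \
+    --model edgespeechnets --model_path ./edgespeechnets_model
+rostopic echo /opendr/speech_recognition
+```
+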
+----
+## Point cloud input
+
+### 3D Object Detection Voxel ROS Node
+
+A ROS node for performing 3D object detection (Voxel) using PointPillars or TANet methods, with models either pretrained on the KITTI dataset or custom-trained.
+
+You can find the 3D object detection Voxel ROS node python script [here](./scripts/object_detection_3d_voxel_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [3D object detection Voxel tool](../../../../src/opendr/perception/object_detection_3d/voxel_object_detection_3d/voxel_object_detection_3d_learner.py)
+whose documentation can be found [here](../../../../docs/reference/voxel-object-detection-3d.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros-node) for convenience.
+
+2. You are then ready to start the 3D object detection node:
+
+ ```shell
+ rosrun opendr_perception object_detection_3d_voxel_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/objects3d`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `-n or --model_name MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`)
+ - `-c or --model_config_path MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/objects3d`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
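+Combining this with the point cloud dataset node described further below, a minimal end-to-end sketch run from separate terminals:
+```shell
+rosrun opendr_perception point_cloud_dataset_node.py
+rosrun opendr_perception object_detection_3d_voxel_node.py
+rostopic echo /opendr/objects3d
+```
+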
+### 3D Object Tracking AB3DMOT ROS Node
+
+A ROS node for performing 3D object tracking using the stateless AB3DMOT method.
+This is a detection-based method, so a 3D object detector is needed to provide detections, which are then used to make associations and generate tracking IDs.
+The predicted tracking annotations are split into two topics, one with detections and one with tracking IDs.
+
+You can find the 3D object tracking AB3DMOT ROS node python script [here](./scripts/object_tracking_3d_ab3dmot_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [3D object tracking AB3DMOT tool](../../../../src/opendr/perception/object_tracking_3d/ab3dmot/object_tracking_3d_ab3dmot_learner.py)
+whose documentation can be found [here](../../../../docs/reference/object-tracking-3d-ab3dmot.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros-node) for convenience.
+
+2. You are then ready to start the 3D object tracking node:
+
+ ```shell
+ rosrun opendr_perception object_tracking_3d_ab3dmot_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects3d`)
+ - `-t or --tracking3d_id_topic TRACKING3D_ID_TOPIC`: topic name for output tracking IDs with the same element count as in detection topic, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `-dn or --detector_model_name DETECTOR_MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`)
+ - `-dc or --detector_model_config_path DETECTOR_MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/objects3d`
+ - Tracking ID messages: `/opendr/objects_tracking_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
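+As with the detection node, a sketch that feeds the tracker from the point cloud dataset node (separate terminals) and monitors the assigned tracking IDs:
+```shell
+rosrun opendr_perception point_cloud_dataset_node.py
+rosrun opendr_perception object_tracking_3d_ab3dmot_node.py
+rostopic echo /opendr/objects_tracking_id
+```
+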
+----
+## Biosignal input
+
+### Heart Anomaly Detection ROS Node
+
+A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on the AF dataset.
+
+You can find the heart anomaly detection ROS node python script [here](./scripts/heart_anomaly_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and
+[GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here:
+[ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing ECG data.
+
+2. You are then ready to start the heart anomaly detection node:
+
+ ```shell
+ rosrun opendr_perception heart_anomaly_detection_node.py
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`)
+ - `-o or --output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/heart_anomaly`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
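+A usage sketch that selects the GRU model and prints the detections, assuming an ECG publisher on the default topic:
+```shell
+rosrun opendr_perception heart_anomaly_detection_node.py --model gru
+rostopic echo /opendr/heart_anomaly
+```
+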
+----
+## Dataset ROS Nodes
+
+The dataset nodes can be used to publish data from the disk, which is useful for testing the functionality without the use of a sensor.
+Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair.
+If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic.
+The OpenDR toolkit currently provides two such nodes, an image dataset node and a point cloud dataset node.
+
+### Image Dataset ROS Node
+
+The image dataset node downloads the `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to a ROS topic,
+which is intended to be used with the [2D object tracking nodes](#2d-object-tracking-ros-nodes).
+
+You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements,
+to use alongside other nodes and datasets.
+You can inspect [the node](./scripts/image_dataset_node.py) and modify it to your needs for other image datasets.
+
+To get an image from a dataset on the disk, you can start an `image_dataset_node.py` node as follows:
+```shell
+rosrun opendr_perception image_dataset_node.py
+```
+The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-o or --output_rgb_image_topic`: topic name to publish the data (default=`/opendr/dataset_image`)
+ - `-f or --fps FPS`: data fps (default=`10`)
+ - `-d or --dataset_path DATASET_PATH`: path to a dataset (default=`/MOT`)
+ - `-ks or --mot20_subsets_path MOT20_SUBSETS_PATH`: path to MOT20 subsets (default=`../../src/opendr/perception/object_tracking_2d/datasets/splits/nano_mot20.train`)
+
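+For example, a sketch that publishes the dataset images at a higher rate and verifies the stream in a second terminal:
+```shell
+rosrun opendr_perception image_dataset_node.py -f 30
+rosrun rqt_image_view rqt_image_view /opendr/dataset_image
+```
+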
+### Point Cloud Dataset ROS Node
+
+The point cloud dataset node downloads the `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to a ROS topic,
+which is intended to be used with the [3D object detection node](#3d-object-detection-voxel-ros-node),
+as well as the [3D object tracking node](#3d-object-tracking-ab3dmot-ros-node).
+
+You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements,
+to use alongside other nodes and datasets.
+You can inspect [the node](./scripts/point_cloud_dataset_node.py) and modify it to your needs for other point cloud datasets.
+
+To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset_node.py` node as follows:
+```shell
+rosrun opendr_perception point_cloud_dataset_node.py
+```
+The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-o or --output_point_cloud_topic`: topic name to publish the data (default=`/opendr/dataset_point_cloud`)
+ - `-f or --fps FPS`: data fps (default=`10`)
+ - `-d or --dataset_path DATASET_PATH`: path to a dataset; if it does not exist, the nano KITTI dataset will be downloaded there (default=`/KITTI/opendr_nano_kitti`)
+ - `-ks or --kitti_subsets_path KITTI_SUBSETS_PATH`: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=`../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets`)
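+
+Similarly, you can verify that point clouds are being published (the `--noarr` flag of `rostopic echo` suppresses the large data arrays) with:
+```shell
+rostopic echo --noarr /opendr/dataset_point_cloud
+```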
diff --git a/projects/opendr_ws/src/perception/include/perception/.keep b/projects/opendr_ws/src/opendr_perception/include/opendr_perception/.keep
similarity index 100%
rename from projects/opendr_ws/src/perception/include/perception/.keep
rename to projects/opendr_ws/src/opendr_perception/include/opendr_perception/.keep
diff --git a/projects/opendr_ws/src/perception/package.xml b/projects/opendr_ws/src/opendr_perception/package.xml
similarity index 94%
rename from projects/opendr_ws/src/perception/package.xml
rename to projects/opendr_ws/src/opendr_perception/package.xml
index 7b7c0e00c9..b9a89f0245 100644
--- a/projects/opendr_ws/src/perception/package.xml
+++ b/projects/opendr_ws/src/opendr_perception/package.xml
@@ -1,7 +1,7 @@
- perception
- 1.1.1
+ opendr_perception
+ 2.0.0
OpenDR's ROS nodes for perception package
OpenDR Project Coordinator
Apache License v2.0
diff --git a/projects/opendr_ws/src/perception/scripts/audiovisual_emotion_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/audiovisual_emotion_recognition_node.py
old mode 100644
new mode 100755
similarity index 67%
rename from projects/opendr_ws/src/perception/scripts/audiovisual_emotion_recognition.py
rename to projects/opendr_ws/src/opendr_perception/scripts/audiovisual_emotion_recognition_node.py
index c4fe3e126a..91bff9b3c8
--- a/projects/opendr_ws/src/perception/scripts/audiovisual_emotion_recognition.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/audiovisual_emotion_recognition_node.py
@@ -19,6 +19,7 @@
import numpy as np
import torch
import librosa
+import cv2
import rospy
import message_filters
@@ -35,28 +36,25 @@
class AudiovisualEmotionNode:
def __init__(self, input_video_topic="/usb_cam/image_raw", input_audio_topic="/audio/audio",
- annotations_topic="/opendr/audiovisual_emotion", buffer_size=3.6, device="cuda"):
+ output_emotions_topic="/opendr/audiovisual_emotion", buffer_size=3.6, device="cuda"):
"""
Creates a ROS Node for audiovisual emotion recognition
:param input_video_topic: Topic from which we are reading the input video. Expects detected face of size 224x224
:type input_video_topic: str
:param input_audio_topic: Topic from which we are reading the input audio
:type input_audio_topic: str
- :param annotations_topic: Topic to which we are publishing the predicted class
- :type annotations_topic: str
+ :param output_emotions_topic: Topic to which we are publishing the predicted class
+ :type output_emotions_topic: str
:param buffer_size: length of audio and video in sec
:type buffer_size: float
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
"""
- self.publisher = rospy.Publisher(annotations_topic, Classification2D, queue_size=10)
+ self.publisher = rospy.Publisher(output_emotions_topic, Classification2D, queue_size=10)
- video_sub = message_filters.Subscriber(input_video_topic, ROS_Image)
- audio_sub = message_filters.Subscriber(input_audio_topic, AudioData)
- # synchronize video and audio data topics
- ts = message_filters.ApproximateTimeSynchronizer([video_sub, audio_sub], 10, 0.1, allow_headerless=True)
- ts.registerCallback(self.callback)
+ self.input_video_topic = input_video_topic
+ self.input_audio_topic = input_audio_topic
self.bridge = ROSBridge()
@@ -77,21 +75,31 @@ def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_audiovisualemotion_recognition', anonymous=True)
- rospy.loginfo("Audiovisual emotion recognition node started!")
+ rospy.init_node('opendr_audiovisual_emotion_recognition_node', anonymous=True)
+
+ video_sub = message_filters.Subscriber(self.input_video_topic, ROS_Image)
+ audio_sub = message_filters.Subscriber(self.input_audio_topic, AudioData)
+ # synchronize video and audio data topics
+ ts = message_filters.ApproximateTimeSynchronizer([video_sub, audio_sub], 10, 0.1, allow_headerless=True)
+ ts.registerCallback(self.callback)
+
+ rospy.loginfo("Audiovisual emotion recognition node started.")
rospy.spin()
def callback(self, image_data, audio_data):
"""
Callback that process the input data and publishes to the corresponding topics
- :param image_data: input image message, face image of size 224x224
+ :param image_data: input image message, face image
:type image_data: sensor_msgs.msg.Image
:param audio_data: input audio message, speech
:type audio_data: audio_common_msgs.msg.AudioData
"""
audio_data = np.reshape(np.frombuffer(audio_data.data, dtype=np.int16)/32768.0, (1, -1))
self.data_buffer = np.append(self.data_buffer, audio_data)
+
image_data = self.bridge.from_ros_image(image_data, encoding='bgr8').convert(format='channels_last')
+ image_data = cv2.resize(image_data, (224, 224))
+
self.video_buffer = np.append(self.video_buffer, np.expand_dims(image_data.data, 0), axis=0)
if self.data_buffer.shape[0] > 16000*self.buffer_size:
@@ -116,16 +124,35 @@ def callback(self, image_data, audio_data):
def select_distributed(m, n): return [i*n//m + n//(2*m) for i in range(m)]
-if __name__ == '__main__':
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
+if __name__ == '__main__':
parser = argparse.ArgumentParser()
- parser.add_argument('--video_topic', type=str, help='listen to video input data on this topic')
- parser.add_argument('--audio_topic', type=str, help='listen to audio input data on this topic')
- parser.add_argument('--buffer_size', type=float, default=3.6, help='size of the audio buffer in seconds')
+ parser.add_argument("-iv", "--input_video_topic", type=str, default="/usb_cam/image_raw",
+ help="Listen to video input data on this topic")
+ parser.add_argument("-ia", "--input_audio_topic", type=str, default="/audio/audio",
+ help="Listen to audio input data on this topic")
+ parser.add_argument("-o", "--output_emotions_topic", type=str, default="/opendr/audiovisual_emotion",
+ help="Topic name for output emotions recognition")
+ parser.add_argument("--device", type=str, default="cuda",
+ help="Device to use (cpu, cuda)", choices=["cuda", "cpu"])
+ parser.add_argument("--buffer_size", type=float, default=3.6,
+ help="Size of the audio buffer in seconds")
args = parser.parse_args()
- avnode = AudiovisualEmotionNode(input_video_topic=args.video_topic, input_audio_topic=args.audio_topic,
- annotations_topic="/opendr/audiovisual_emotion",
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+ except:
+ print("Using CPU")
+ device = "cpu"
+
+ avnode = AudiovisualEmotionNode(input_video_topic=args.input_video_topic, input_audio_topic=args.input_audio_topic,
+ output_emotions_topic=args.output_emotions_topic,
buffer_size=args.buffer_size, device=device)
avnode.listen()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/face_detection_retinaface_node.py b/projects/opendr_ws/src/opendr_perception/scripts/face_detection_retinaface_node.py
new file mode 100755
index 0000000000..24665eb35c
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/face_detection_retinaface_node.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class FaceDetectionNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_faces_annotated", detections_topic="/opendr/faces",
+ device="cuda", backbone="resnet"):
+ """
+ Creates a ROS Node for face detection with Retinaface.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no face detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: retinaface backbone, options are either 'mnet' or 'resnet',
+ where 'mnet' detects masked faces as well
+ :type backbone: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.face_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.face_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the face detector
+ self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
+ self.face_detector.download(path=".", verbose=True)
+ self.face_detector.load("retinaface_{}".format(backbone))
+ self.class_names = ["face", "masked_face"]
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_face_detection_retinaface_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Face detection RetinaFace node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run face detection
+ boxes = self.face_detector.infer(image)
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ if self.face_publisher is not None:
+ self.face_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with face detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
+ # Convert the annotated OpenDR image to a ROS image message using the bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_faces_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/faces")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone",
+ help="Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects "
+ "masked faces as well",
+ type=str, default="resnet", choices=["resnet", "mnet"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ face_detection_node = FaceDetectionNode(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ face_detection_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/face_recognition_node.py b/projects/opendr_ws/src/opendr_perception/scripts/face_recognition_node.py
new file mode 100755
index 0000000000..ebd0da3c18
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/face_recognition_node.py
@@ -0,0 +1,187 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import cv2
+import torch
+
+import rospy
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.face_recognition import FaceRecognitionLearner
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d.datasets.transforms import BoundingBoxListToNumpyArray
+
+
+class FaceRecognitionNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_face_reco_annotated",
+ detections_topic="/opendr/face_recognition", detections_id_topic="/opendr/face_recognition_id",
+ database_path="./database", device="cuda", backbone="mobilefacenet"):
+ """
+ Creates a ROS Node for face recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the recognized face information (if None,
+ no face recognition message is published)
+ :type detections_topic: str
+ :param detections_id_topic: Topic to which we are publishing the ID of the recognized person (if None,
+ no ID message is published)
+ :type detections_id_topic: str
+ :param device: Device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: Backbone network
+ :type backbone: str
+ :param database_path: Path of the directory where the images of the faces to be recognized are stored
+ :type database_path: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.face_publisher = rospy.Publisher(detections_topic, ObjectHypothesis, queue_size=1)
+ else:
+ self.face_publisher = None
+
+ if detections_id_topic is not None:
+ self.face_id_publisher = rospy.Publisher(detections_id_topic, String, queue_size=1)
+ else:
+ self.face_id_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the face recognizer
+ self.recognizer = FaceRecognitionLearner(device=device, mode='backbone_only', backbone=backbone)
+ self.recognizer.download(path=".")
+ self.recognizer.load(".")
+ self.recognizer.fit_reference(database_path, save_path=".", create_new=True)
+
+ # Initialize the face detector
+ self.face_detector = RetinaFaceLearner(backbone='mnet', device=device)
+ self.face_detector.download(path=".", verbose=True)
+ self.face_detector.load("retinaface_{}".format('mnet'))
+ self.class_names = ["face", "masked_face"]
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_face_recognition_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Face recognition node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ # Run face detection and recognition
+ if image is not None:
+ bounding_boxes = self.face_detector.infer(image)
+ if bounding_boxes:
+ bounding_boxes = BoundingBoxListToNumpyArray()(bounding_boxes)
+ boxes = bounding_boxes[:, :4]
+ for idx, box in enumerate(boxes):
+ (startX, startY, endX, endY) = int(box[0]), int(box[1]), int(box[2]), int(box[3])
+ frame = image[startY:endY, startX:endX]
+ result = self.recognizer.infer(frame)
+
+ # Publish face information and ID
+ if self.face_publisher is not None:
+ self.face_publisher.publish(self.bridge.to_ros_face(result))
+
+ if self.face_id_publisher is not None:
+ self.face_id_publisher.publish(self.bridge.to_ros_face_id(result))
+
+ if self.image_publisher is not None:
+ if result.description != 'Not found':
+ color = (0, 255, 0)
+ else:
+ color = (0, 0, 255)
+ # Annotate image with face detection/recognition boxes
+ cv2.rectangle(image, (startX, startY), (endX, endY), color, thickness=2)
+ cv2.putText(image, result.description, (startX, endY - 10), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, color, 2, cv2.LINE_AA)
+
+ if self.image_publisher is not None:
+ # Convert the annotated OpenDR image to a ROS image message using the bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_face_reco_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/face_recognition")
+ parser.add_argument("-id", "--detections_id_topic", help="Topic name for detection ID messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/face_recognition_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to mobilefacenet",
+ type=str, default="mobilefacenet", choices=["mobilefacenet"])
+ parser.add_argument("--dataset_path",
+ help="Path of the directory where the images of the faces to be recognized are stored, "
+ "defaults to \"./database\"",
+ type=str, default="./database")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ face_recognition_node = FaceRecognitionNode(device=device, backbone=args.backbone, database_path=args.dataset_path,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ detections_id_topic=args.detections_id_topic)
+ face_recognition_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/facial_emotion_estimation_node.py b/projects/opendr_ws/src/opendr_perception/scripts/facial_emotion_estimation_node.py
new file mode 100644
index 0000000000..c2da6e55ce
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/facial_emotion_estimation_node.py
@@ -0,0 +1,213 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+import cv2
+from torchvision import transforms
+import PIL
+
+import rospy
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+from opendr.perception.facial_expression_recognition import image_processing
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d.datasets.transforms import BoundingBoxListToNumpyArray
+
+INPUT_IMAGE_SIZE = (96, 96)
+INPUT_IMAGE_NORMALIZATION_MEAN = [0.0, 0.0, 0.0]
+INPUT_IMAGE_NORMALIZATION_STD = [1.0, 1.0, 1.0]
+
+
+class FacialEmotionEstimationNode:
+ def __init__(self,
+ face_detector_learner,
+ input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_emotion_estimation_annotated",
+ output_emotions_topic="/opendr/facial_emotion_estimation",
+ output_emotions_description_topic="/opendr/facial_emotion_estimation_description",
+ device="cuda"):
+ """
+ Creates a ROS Node for facial emotion estimation.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param output_emotions_topic: Topic to which we are publishing the facial emotion results
+ (if None, we are not publishing the info)
+ :type output_emotions_topic: str
+ :param output_emotions_description_topic: Topic to which we are publishing the description of the estimated
+ facial emotion (if None, we are not publishing the description)
+ :type output_emotions_description_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+
+ # Set up ROS topics and bridge
+ self.input_rgb_image_topic = input_rgb_image_topic
+ self.bridge = ROSBridge()
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if output_emotions_topic is not None:
+ self.hypothesis_publisher = rospy.Publisher(output_emotions_topic, ObjectHypothesis, queue_size=1)
+ else:
+ self.hypothesis_publisher = None
+
+ if output_emotions_description_topic is not None:
+ self.string_publisher = rospy.Publisher(output_emotions_description_topic, String, queue_size=1)
+ else:
+ self.string_publisher = None
+
+ self.face_detector = face_detector_learner
+
+ # Initialize the facial emotion estimator
+ self.facial_emotion_estimator = FacialEmotionLearner(device=device, batch_size=2,
+ ensemble_size=9,
+ name_experiment='esr_9')
+ self.facial_emotion_estimator.init_model(num_branches=9)
+
+ model_saved_path = self.facial_emotion_estimator.download(path=None, mode="pretrained")
+ self.facial_emotion_estimator.load(ensemble_size=9, path_to_saved_network=model_saved_path)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data
+ """
+ rospy.init_node('opendr_facial_emotion_estimation_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Facial emotion estimation node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ image = self.bridge.from_ros_image(data, encoding='bgr8').opencv()
+
+ emotion = None
+ # Run face detection and emotion estimation
+ if image is not None:
+ bounding_boxes = self.face_detector.infer(image)
+ if bounding_boxes:
+ bounding_boxes = BoundingBoxListToNumpyArray()(bounding_boxes)
+ boxes = bounding_boxes[:, :4]
+ for idx, box in enumerate(boxes):
+ (startX, startY, endX, endY) = int(box[0]), int(box[1]), int(box[2]), int(box[3])
+ face_crop = image[startY:endY, startX:endX]
+
+ # Preprocess detected face
+ input_face = _pre_process_input_image(face_crop)
+
+ # Recognize facial expression
+
+ emotion, affect = self.facial_emotion_estimator.infer(input_face)
+ # Converts from Tensor to ndarray
+ affect = np.array([a.cpu().detach().numpy() for a in affect])
+ affect = affect[0] # a numpy array of valence and arousal values
+ emotion = emotion[0] # the emotion class with confidence tensor
+
+ cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 255), thickness=2)
+ cv2.putText(image, "Valence: %.2f" % affect[0], (startX, endY - 30), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+ cv2.putText(image, "Arousal: %.2f" % affect[1], (startX, endY - 15), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+ cv2.putText(image, emotion.description, (startX, endY), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+
+ if self.hypothesis_publisher is not None and emotion:
+ self.hypothesis_publisher.publish(self.bridge.to_ros_category(emotion))
+
+ if self.string_publisher is not None and emotion:
+ self.string_publisher.publish(self.bridge.to_ros_category_description(emotion))
+
+ if self.image_publisher is not None:
+ # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def _pre_process_input_image(image):
+ """
+ Pre-processes an image for ESR-9.
+    :param image: face crop to pre-process
+    :type image: ndarray
+    :return: pre-processed image as a (1, C, H, W) torch.Tensor
+ """
+
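+    # Resize to the network input size, convert to a tensor and normalize with the expected mean and std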
+ image = image_processing.resize(image, INPUT_IMAGE_SIZE)
+ image = PIL.Image.fromarray(image)
+ image = transforms.Normalize(mean=INPUT_IMAGE_NORMALIZATION_MEAN,
+ std=INPUT_IMAGE_NORMALIZATION_STD)(transforms.ToTensor()(image)).unsqueeze(0)
+
+ return image
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-i', '--input_rgb_image_topic', type=str, help='Topic name for input rgb image',
+ default='/usb_cam/image_raw')
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_emotion_estimation_annotated")
+ parser.add_argument("-e", "--output_emotions_topic", help="Topic name for output emotion",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/facial_emotion_estimation")
+ parser.add_argument('-m', '--output_emotions_description_topic',
+ help='Topic to which we are publishing the description of the estimated facial emotion',
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/facial_emotion_estimation_description")
+ parser.add_argument('-d', '--device', help='Device to use, either cpu or cuda',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ print("GPU found.")
+ device = 'cuda'
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = 'cpu'
+ except:
+ print("Using CPU")
+ device = 'cpu'
+
+ # Initialize the face detector
+ face_detector = RetinaFaceLearner(backbone="resnet", device=device)
+ face_detector.download(path=".", verbose=True)
+ face_detector.load("retinaface_{}".format("resnet"))
+
+ facial_emotion_estimation_node = FacialEmotionEstimationNode(
+ face_detector,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ output_emotions_topic=args.output_emotions_topic,
+ output_emotions_description_topic=args.output_emotions_description_topic,
+ device=device)
+
+ facial_emotion_estimation_node.listen()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/fall_detection_node.py b/projects/opendr_ws/src/opendr_perception/scripts/fall_detection_node.py
new file mode 100755
index 0000000000..210d49f8e3
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/fall_detection_node.py
@@ -0,0 +1,183 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import argparse
+import torch
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import BoundingBox, BoundingBoxList
+from opendr.perception.pose_estimation import get_bbox
+from opendr.perception.pose_estimation import LightweightOpenPoseLearner
+from opendr.perception.fall_detection import FallDetectorLearner
+
+
+class FallDetectionNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_fallen_annotated", detections_topic="/opendr/fallen",
+ device="cuda", num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+ Creates a ROS Node for rule-based fall detection based on Lightweight OpenPose.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no fall detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param num_refinement_stages: Specifies the number of pose estimation refinement stages that are added on the
+        model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower but more accurate
+        inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.fall_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.fall_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the pose estimation learner
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ # Initialize the fall detection learner
+ self.fall_detector = FallDetectorLearner(self.pose_estimator)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_fall_detection_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Fall detection node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run fall detection
+ detections = self.fall_detector.infer(image)
+
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ bboxes = BoundingBoxList([])
+ fallen_pose_id = 0
+ for detection in detections:
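+            # detection[0] holds the fall prediction (1 means a fall was detected), detection[2] the corresponding pose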
+ fallen = detection[0].data
+
+ if fallen == 1:
+ pose = detection[2]
+ x, y, w, h = get_bbox(pose)
+ if self.image_publisher is not None:
+ # Paint person bounding box inferred from pose
+ color = (0, 0, 255)
+ cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
+ cv2.putText(image, "Fallen person", (x, y + h - 10), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, color, 2, cv2.LINE_AA)
+
+ if self.fall_publisher is not None:
+ # Convert detected boxes to ROS type and add to list
+ bboxes.data.append(BoundingBox(left=x, top=y, width=w, height=h, name=fallen_pose_id))
+ fallen_pose_id += 1
+
+ if self.fall_publisher is not None:
+ if len(bboxes) > 0:
+ self.fall_publisher.publish(self.bridge.to_ros_boxes(bboxes))
+
+ if self.image_publisher is not None:
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_fallen_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/fallen")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
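+    # The acceleration flags trade accuracy for speed: stride enabled, no refinement stages and half-precision inference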
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ fall_detection_node = FallDetectionNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+ fall_detection_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/perception/scripts/heart_anomaly_detection.py b/projects/opendr_ws/src/opendr_perception/scripts/heart_anomaly_detection_node.py
similarity index 55%
rename from projects/opendr_ws/src/perception/scripts/heart_anomaly_detection.py
rename to projects/opendr_ws/src/opendr_perception/scripts/heart_anomaly_detection_node.py
index 4e72471b9d..98001abcdb 100755
--- a/projects/opendr_ws/src/perception/scripts/heart_anomaly_detection.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/heart_anomaly_detection_node.py
@@ -14,33 +14,36 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import rospy
+import argparse
import torch
+
+import rospy
from vision_msgs.msg import Classification2D
-import argparse
from std_msgs.msg import Float32MultiArray
+
from opendr_bridge import ROSBridge
from opendr.perception.heart_anomaly_detection import GatedRecurrentUnitLearner, AttentionNeuralBagOfFeatureLearner
class HeartAnomalyNode:
- def __init__(self, input_topic="/ecg/ecg", prediction_topic="/opendr/heartanomaly", device="cuda", model='anbof'):
+ def __init__(self, input_ecg_topic="/ecg/ecg", output_heart_anomaly_topic="/opendr/heart_anomaly",
+ device="cuda", model="anbof"):
"""
Creates a ROS Node for heart anomaly (atrial fibrillation) detection from ecg data
- :param input_topic: Topic from which we are reading the input array data
- :type input_topic: str
- :param prediction_topic: Topic to which we are publishing the predicted class
- :type prediction_topic: str
+ :param input_ecg_topic: Topic from which we are reading the input array data
+ :type input_ecg_topic: str
+ :param output_heart_anomaly_topic: Topic to which we are publishing the predicted class
+ :type output_heart_anomaly_topic: str
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
:param model: model to use: anbof or gru
:type model: str
"""
- self.publisher = rospy.Publisher(prediction_topic, Classification2D, queue_size=10)
+ self.publisher = rospy.Publisher(output_heart_anomaly_topic, Classification2D, queue_size=10)
- rospy.Subscriber(input_topic, Float32MultiArray, self.callback)
+ rospy.Subscriber(input_ecg_topic, Float32MultiArray, self.callback)
self.bridge = ROSBridge()
@@ -48,7 +51,6 @@ def __init__(self, input_topic="/ecg/ecg", prediction_topic="/opendr/heartanomal
self.channels = 1
self.series_length = 9000
- # Initialize the gesture recognition
if model == 'gru':
self.learner = GatedRecurrentUnitLearner(in_channels=self.channels, series_length=self.series_length,
n_class=4, device=device)
@@ -63,15 +65,15 @@ def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_heart_anomaly_detection', anonymous=True)
- rospy.loginfo("Heart anomaly detection node started!")
+ rospy.init_node('opendr_heart_anomaly_detection_node', anonymous=True)
+ rospy.loginfo("Heart anomaly detection node started.")
rospy.spin()
def callback(self, msg_data):
"""
Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: std_msgs.msg.Float32MultiArray
+ :param msg_data: input message
+ :type msg_data: std_msgs.msg.Float32MultiArray
"""
# Convert Float32MultiArray to OpenDR Timeseries
data = self.bridge.from_rosarray_to_timeseries(msg_data, self.channels, self.series_length)
@@ -83,17 +85,35 @@ def callback(self, msg_data):
ros_class = self.bridge.from_category_to_rosclass(class_pred)
self.publisher.publish(ros_class)
+
if __name__ == '__main__':
- # Select the device for running
+ parser = argparse.ArgumentParser()
+    parser.add_argument("-i", "--input_ecg_topic", type=str, default="/ecg/ecg",
+                        help="Topic name for input ECG data")
+    parser.add_argument("-o", "--output_heart_anomaly_topic", type=str, default="/opendr/heart_anomaly",
+                        help="Topic name for heart anomaly detection messages")
+ parser.add_argument("--device", type=str, default="cuda", help="Device to use (cpu, cuda)",
+ choices=["cuda", "cpu"])
+ parser.add_argument("--model", type=str, default="anbof", help="model to be used for prediction: anbof or gru",
+ choices=["anbof", "gru"])
+
+ args = parser.parse_args()
+
try:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
except:
- device = 'cpu'
+ print("Using CPU")
+ device = "cpu"
- parser = argparse.ArgumentParser()
- parser.add_argument('input_topic', type=str, help='listen to input data on this topic')
- parser.add_argument('model', type=str, help='model to be used for prediction: anbof or gru')
- args = parser.parse_args()
+ heart_anomaly_detection_node = HeartAnomalyNode(input_ecg_topic=args.input_ecg_topic,
+ output_heart_anomaly_topic=args.output_heart_anomaly_topic,
+ model=args.model, device=device)
- gesture_node = HeartAnomalyNode(input_topic=args.input_topic, model=args.model, device=device)
- gesture_node.listen()
+ heart_anomaly_detection_node.listen()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/hr_pose_estimation_node.py b/projects/opendr_ws/src/opendr_perception/scripts/hr_pose_estimation_node.py
new file mode 100755
index 0000000000..0a471b224e
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/hr_pose_estimation_node.py
@@ -0,0 +1,164 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rospy
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge.msg import OpenDRPose2D
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.pose_estimation import draw
+from opendr.perception.pose_estimation import HighResolutionPoseEstimationLearner
+
+
+class PoseEstimationNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_pose_annotated", detections_topic="/opendr/poses", device="cuda",
+ num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+        Creates a ROS Node for high-resolution pose estimation with the HR Pose Estimation learner.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no pose detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param num_refinement_stages: Specifies the number of pose estimation refinement stages that are added on the
+        model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower but more accurate
+        inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.pose_publisher = rospy.Publisher(detections_topic, OpenDRPose2D, queue_size=1)
+ else:
+ self.pose_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the high resolution pose estimation learner
+ self.pose_estimator = HighResolutionPoseEstimationLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_hr_pose_estimation_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Pose estimation node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run pose estimation
+ poses = self.pose_estimator.infer(image)
+
+ # Publish detections in ROS message
+ if self.pose_publisher is not None:
+ for pose in poses:
+ if pose.id is None: # Temporary fix for pose not having id
+ pose.id = -1
+ # Convert OpenDR pose to ROS pose message using bridge and publish it
+ self.pose_publisher.publish(self.bridge.to_ros_pose(pose))
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with poses
+ for pose in poses:
+ draw(image, pose)
+ # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
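+    # The acceleration flags trade accuracy for speed: stride enabled, no refinement stages and half-precision inference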
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ pose_estimator_node = PoseEstimationNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+ pose_estimator_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/image_dataset_node.py b/projects/opendr_ws/src/opendr_perception/scripts/image_dataset_node.py
new file mode 100755
index 0000000000..575c1c4dce
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/image_dataset_node.py
@@ -0,0 +1,108 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import rospy
+import time
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+from opendr.engine.datasets import DatasetIterator
+from opendr.perception.object_tracking_2d import MotDataset, RawMotDatasetIterator
+
+
+class ImageDatasetNode:
+ def __init__(
+ self,
+ dataset: DatasetIterator,
+ output_rgb_image_topic="/opendr/dataset_image",
+ data_fps=30,
+ ):
+        """
+        Creates a ROS Node for publishing dataset images.
+        :param dataset: DatasetIterator from which the images are read
+        :param output_rgb_image_topic: Topic to which the images are published
+        :param data_fps: Frequency, in frames per second, at which the images are published
+        """
+
+ self.dataset = dataset
+ # Initialize OpenDR ROSBridge object
+ self.bridge = ROSBridge()
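+        # Delay between consecutive frames, so that images are published at roughly the requested FPS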
+ self.delay = 1.0 / data_fps
+
+ self.output_image_publisher = rospy.Publisher(
+ output_rgb_image_topic, ROS_Image, queue_size=10
+ )
+
+ def start(self):
+ rospy.loginfo("Timing images")
+ i = 0
+ while not rospy.is_shutdown():
+ image = self.dataset[i % len(self.dataset)][0] # Dataset should have an (Image, Target) pair as elements
+ message = self.bridge.to_ros_image(
+ image, encoding="bgr8"
+ )
+ self.output_image_publisher.publish(message)
+
+ time.sleep(self.delay)
+ i += 1
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-d", "--dataset_path", help="Path to a dataset",
+ type=str, default="MOT")
+ parser.add_argument(
+ "-ks", "--mot20_subsets_path", help="Path to mot20 subsets",
+ type=str, default=os.path.join(
+ "..", "..", "src", "opendr", "perception", "object_tracking_2d",
+ "datasets", "splits", "nano_mot20.train"
+ )
+ )
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name to publish the data",
+ type=str, default="/opendr/dataset_image")
+ parser.add_argument("-f", "--fps", help="Data FPS",
+ type=float, default=30)
+ args = parser.parse_args()
+
+ dataset_path = args.dataset_path
+ mot20_subsets_path = args.mot20_subsets_path
+ output_rgb_image_topic = args.output_rgb_image_topic
+ data_fps = args.fps
+
+ if not os.path.exists(dataset_path):
+ dataset_path = MotDataset.download_nano_mot20(
+ "MOT", True
+ ).path
+
+ dataset = RawMotDatasetIterator(
+ dataset_path,
+ {
+ "mot20": mot20_subsets_path
+ },
+ scan_labels=False
+ )
+
+ rospy.init_node("opendr_image_dataset_node", anonymous=True)
+
+ dataset_node = ImageDatasetNode(
+ dataset,
+ output_rgb_image_topic=output_rgb_image_topic,
+ data_fps=data_fps,
+ )
+
+ rospy.loginfo("Image dataset node started.")
+ dataset_node.start()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/perception/scripts/landmark_based_facial_expression_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py
old mode 100644
new mode 100755
similarity index 66%
rename from projects/opendr_ws/src/perception/scripts/landmark_based_facial_expression_recognition.py
rename to projects/opendr_ws/src/opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py
index a6b0c2188f..96a274f555
--- a/projects/opendr_ws/src/perception/scripts/landmark_based_facial_expression_recognition.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/landmark_based_facial_expression_recognition_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
+import argparse
import rospy
import torch
import numpy as np
@@ -29,14 +29,14 @@
class LandmarkFacialExpressionRecognitionNode:
- def __init__(self, input_image_topic="/usb_cam/image_raw",
- output_category_topic="/opendr/landmark_based_expression_recognition",
- output_category_description_topic="/opendr/landmark_based_expression_recognition_description",
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_category_topic="/opendr/landmark_expression_recognition",
+ output_category_description_topic="/opendr/landmark_expression_recognition_description",
device="cpu", model='pstbln_afew', shape_predictor='./predictor_path'):
"""
- Creates a ROS Node for pose detection
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
+ Creates a ROS Node for landmark-based facial expression recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
:param output_category_topic: Topic to which we are publishing the recognized facial expression category info
(if None, we are not publishing the info)
:type output_category_topic: str
@@ -53,6 +53,8 @@ def __init__(self, input_image_topic="/usb_cam/image_raw",
"""
# Set up ROS topics and bridge
+ self.input_rgb_image_topic = input_rgb_image_topic
+ self.bridge = ROSBridge()
if output_category_topic is not None:
self.hypothesis_publisher = rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=10)
@@ -64,9 +66,6 @@ def __init__(self, input_image_topic="/usb_cam/image_raw",
else:
self.string_publisher = None
- self.input_image_topic = input_image_topic
- self.bridge = ROSBridge()
-
# Initialize the landmark-based facial expression recognition
if model == 'pstbln_ck+':
num_point = 303
@@ -90,9 +89,9 @@ def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_landmark_based_facial_expression_recognition', anonymous=True)
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
- rospy.loginfo("landmark-based facial expression recognition node started!")
+ rospy.init_node('opendr_landmark_based_facial_expression_recognition_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback)
+ rospy.loginfo("Landmark-based facial expression recognition node started.")
rospy.spin()
def callback(self, data):
@@ -134,16 +133,42 @@ def _landmark2numpy(landmarks):
if __name__ == '__main__':
- # Select the device for running the
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_category_topic", help="Topic name for output recognized category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/landmark_expression_recognition")
+ parser.add_argument("-d", "--output_category_description_topic", help="Topic name for category description",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/landmark_expression_recognition_description")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model to use, either 'pstbln_ck+', 'pstbln_casia', 'pstbln_afew'",
+ type=str, default="pstbln_afew", choices=['pstbln_ck+', 'pstbln_casia', 'pstbln_afew'])
+ parser.add_argument("-s", "--shape_predictor", help="Shape predictor (landmark_extractor) to use",
+ type=str, default='./predictor_path')
+ args = parser.parse_args()
+
try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
print("GPU not found. Using CPU instead.")
- device = 'cpu'
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
except:
- device = 'cpu'
-
- pose_estimation_node = LandmarkFacialExpressionRecognitionNode(device=device)
- pose_estimation_node.listen()
+ print("Using CPU.")
+ device = "cpu"
+
+ landmark_expression_estimation_node = \
+ LandmarkFacialExpressionRecognitionNode(
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device, model=args.model,
+ shape_predictor=args.shape_predictor)
+ landmark_expression_estimation_node.listen()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_centernet_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_centernet_node.py
new file mode 100755
index 0000000000..4e64663ff1
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_centernet_node.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import CenterNetDetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionCenterNetNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated", detections_topic="/opendr/objects",
+ device="cuda", backbone="resnet50_v1b"):
+ """
+        Creates a ROS Node for object detection with CenterNet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the object detector
+ self.object_detector = CenterNetDetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("centernet_default")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_2d_centernet_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+        rospy.loginfo("Object detection 2D CenterNet node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
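+        # Detections with confidence below the 0.45 threshold are discarded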
+ boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False)
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to \"resnet50_v1b\"",
+ type=str, default="resnet50_v1b", choices=["resnet50_v1b"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_centernet_node = ObjectDetectionCenterNetNode(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_centernet_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_detr_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_detr_node.py
new file mode 100755
index 0000000000..fc11461891
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_detr_node.py
@@ -0,0 +1,232 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import argparse
+import torch
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import DetrLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionDetrNode:
+ def __init__(
+ self,
+ input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects",
+ device="cuda",
+ ):
+ """
+ Creates a ROS Node for object detection with DETR.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
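+        # COCO class labels used by the pretrained DETR model ("N/A" marks ids that have no class assigned)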
+ self.class_names = [
+ "N/A",
+ "person",
+ "bicycle",
+ "car",
+ "motorcycle",
+ "airplane",
+ "bus",
+ "train",
+ "truck",
+ "boat",
+ "traffic light",
+ "fire hydrant",
+ "N/A",
+ "stop sign",
+ "parking meter",
+ "bench",
+ "bird",
+ "cat",
+ "dog",
+ "horse",
+ "sheep",
+ "cow",
+ "elephant",
+ "bear",
+ "zebra",
+ "giraffe",
+ "N/A",
+ "backpack",
+ "umbrella",
+ "N/A",
+ "N/A",
+ "handbag",
+ "tie",
+ "suitcase",
+ "frisbee",
+ "skis",
+ "snowboard",
+ "sports ball",
+ "kite",
+ "baseball bat",
+ "baseball glove",
+ "skateboard",
+ "surfboard",
+ "tennis racket",
+ "bottle",
+ "N/A",
+ "wine glass",
+ "cup",
+ "fork",
+ "knife",
+ "spoon",
+ "bowl",
+ "banana",
+ "apple",
+ "sandwich",
+ "orange",
+ "broccoli",
+ "carrot",
+ "hot dog",
+ "pizza",
+ "donut",
+ "cake",
+ "chair",
+ "couch",
+ "potted plant",
+ "bed",
+ "N/A",
+ "dining table",
+ "N/A",
+ "N/A",
+ "toilet",
+ "N/A",
+ "tv",
+ "laptop",
+ "mouse",
+ "remote",
+ "keyboard",
+ "cell phone",
+ "microwave",
+ "oven",
+ "toaster",
+ "sink",
+ "refrigerator",
+ "N/A",
+ "book",
+ "clock",
+ "vase",
+ "scissors",
+ "teddy bear",
+ "hair drier",
+ "toothbrush",
+ ]
+
+        # Initialize the object detection learner
+ self.detr_learner = DetrLearner(device=device)
+ self.detr_learner.download(path=".", verbose=True)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_2d_detr_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object detection 2D DETR node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding="bgr8")
+
+ # Run object detection
+ boxes = self.detr_learner.infer(image)
+
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_bounding_box_list(boxes) # Convert to ROS bounding_box_list
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=str, default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=str, default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_detr_node = ObjectDetectionDetrNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_detr_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_gem_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_gem_node.py
new file mode 100755
index 0000000000..2a6243a30d
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_gem_node.py
@@ -0,0 +1,272 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import rospy
+import torch
+import message_filters
+import cv2
+import numpy as np
+import argparse
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+from opendr.perception.object_detection_2d import GemLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+from opendr.engine.data import Image
+
+
+class ObjectDetectionGemNode:
+ def __init__(
+ self,
+ input_rgb_image_topic="/camera/color/image_raw",
+ input_infra_image_topic="/camera/infra/image_raw",
+ output_rgb_image_topic="/opendr/rgb_image_objects_annotated",
+ output_infra_image_topic="/opendr/infra_image_objects_annotated",
+ detections_topic="/opendr/objects",
+ device="cuda",
+ pts_rgb=None,
+ pts_infra=None,
+ ):
+ """
+ Creates a ROS Node for object detection with GEM
+ :param input_rgb_image_topic: Topic from which we are reading the input rgb image
+ :type input_rgb_image_topic: str
+ :param input_infra_image_topic: Topic from which we are reading the input infrared image
+        :type input_infra_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated rgb image (if None, we are not
+ publishing annotated image)
+ :type output_rgb_image_topic: str
+ :param output_infra_image_topic: Topic to which we are publishing the annotated infrared image (if None, we are not
+ publishing annotated image)
+ :type output_infra_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, we are
+ not publishing annotations)
+ :type detections_topic: str
+ :param device: Device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param pts_rgb: Points on the rgb image that define alignment with the infrared image. These are camera
+        specific and can be obtained using get_color_infra_alignment.py, which is located in the
+        opendr/perception/object_detection_2d/utils module.
+        :type pts_rgb: {list, numpy.ndarray}
+        :param pts_infra: Points on the infrared image that define alignment with the rgb image. These are camera
+        specific and can be obtained using get_color_infra_alignment.py, which is located in the
+        opendr/perception/object_detection_2d/utils module.
+        :type pts_infra: {list, numpy.ndarray}
+ """
+ rospy.init_node("opendr_object_detection_2d_gem_node", anonymous=True)
+ if output_rgb_image_topic is not None:
+ self.rgb_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=10)
+ else:
+ self.rgb_publisher = None
+ if output_infra_image_topic is not None:
+ self.ir_publisher = rospy.Publisher(output_infra_image_topic, ROS_Image, queue_size=10)
+ else:
+ self.ir_publisher = None
+
+ if detections_topic is not None:
+ self.detection_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=10)
+ else:
+ self.detection_publisher = None
+ if pts_infra is None:
+ pts_infra = np.array(
+ [
+ [478, 248],
+ [465, 338],
+ [458, 325],
+ [468, 256],
+ [341, 240],
+ [335, 310],
+ [324, 321],
+ [311, 383],
+ [434, 365],
+ [135, 384],
+ [67, 257],
+ [167, 206],
+ [124, 131],
+ [364, 276],
+ [424, 269],
+ [277, 131],
+ [41, 310],
+ [202, 320],
+ [188, 318],
+ [188, 308],
+ [196, 241],
+ [499, 317],
+ [311, 164],
+ [220, 216],
+ [435, 352],
+ [213, 363],
+ [390, 364],
+ [212, 368],
+ [390, 370],
+ [467, 324],
+ [415, 364],
+ ]
+ )
+ rospy.logwarn(
+ "\nUsing default calibration values for pts_infra!" +
+ "\nThese are probably incorrect." +
+ "\nThe correct values for pts_infra can be found by running get_color_infra_alignment.py." +
+ "\nThis file is located in the opendr/perception/object_detection2d/utils module."
+                "\nThis file is located in the opendr/perception/object_detection_2d/utils module."
+ if pts_rgb is None:
+ pts_rgb = np.array(
+ [
+ [910, 397],
+ [889, 572],
+ [874, 552],
+ [891, 411],
+ [635, 385],
+ [619, 525],
+ [603, 544],
+ [576, 682],
+ [810, 619],
+ [216, 688],
+ [90, 423],
+ [281, 310],
+ [193, 163],
+ [684, 449],
+ [806, 431],
+ [504, 170],
+ [24, 538],
+ [353, 552],
+ [323, 550],
+ [323, 529],
+ [344, 387],
+ [961, 533],
+ [570, 233],
+ [392, 336],
+ [831, 610],
+ [378, 638],
+ [742, 630],
+ [378, 648],
+ [742, 640],
+ [895, 550],
+ [787, 630],
+ ]
+ )
+ rospy.logwarn(
+ "\nUsing default calibration values for pts_rgb!" +
+ "\nThese are probably incorrect." +
+ "\nThe correct values for pts_rgb can be found by running get_color_infra_alignment.py." +
+ "\nThis file is located in the opendr/perception/object_detection2d/utils module."
+                "\nThis file is located in the opendr/perception/object_detection_2d/utils module."
+ # Object classes
+ self.classes = ["N/A", "chair", "cycle", "bin", "laptop", "drill", "rocker"]
+
+ # Estimating Homography matrix for aligning infra with RGB
+ self.h, status = cv2.findHomography(pts_infra, pts_rgb)
+
+ self.bridge = ROSBridge()
+
+        # Initialize the object detection learner
+ model_backbone = "resnet50"
+
+ self.gem_learner = GemLearner(
+ backbone=model_backbone,
+ num_classes=7,
+ device=device,
+ )
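+        # Select the method used to fuse the RGB and infrared modalities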
+ self.gem_learner.fusion_method = "sc_avg"
+ self.gem_learner.download(path=".", verbose=True)
+
+ # Subscribers
+ msg_rgb = message_filters.Subscriber(input_rgb_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
+ msg_ir = message_filters.Subscriber(input_infra_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
+
+ sync = message_filters.TimeSynchronizer([msg_rgb, msg_ir], 1)
+ sync.registerCallback(self.callback)
+ rospy.loginfo("GEM node initialized.")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data
+ """
+ rospy.loginfo("Object detection 2D GEM node started.")
+ rospy.spin()
+
+ def callback(self, msg_rgb, msg_ir):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param msg_rgb: input rgb image message
+ :type msg_rgb: sensor_msgs.msg.Image
+ :param msg_ir: input infrared image message
+ :type msg_ir: sensor_msgs.msg.Image
+ """
+ # Convert images to OpenDR standard
+ image_rgb = self.bridge.from_ros_image(msg_rgb).opencv()
+ image_ir_raw = self.bridge.from_ros_image(msg_ir, "bgr8").opencv()
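+        # Warp the infrared image into the RGB frame using the homography estimated in the constructor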
+ image_ir = cv2.warpPerspective(image_ir_raw, self.h, (image_rgb.shape[1], image_rgb.shape[0]))
+
+ # Perform inference on images
+ boxes, w_sensor1, _ = self.gem_learner.infer(image_rgb, image_ir)
+
+ # Annotate image and publish results:
+ if self.detection_publisher is not None:
+ ros_detection = self.bridge.to_ros_bounding_box_list(boxes)
+ self.detection_publisher.publish(ros_detection)
+
+ if self.rgb_publisher is not None:
+ plot_rgb = draw_bounding_boxes(image_rgb, boxes, class_names=self.classes)
+ message = self.bridge.to_ros_image(Image(np.uint8(plot_rgb)))
+ self.rgb_publisher.publish(message)
+ if self.ir_publisher is not None:
+ plot_ir = draw_bounding_boxes(image_ir, boxes, class_names=self.classes)
+ message = self.bridge.to_ros_image(Image(np.uint8(plot_ir)))
+ self.ir_publisher.publish(message)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-ic", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/camera/color/image_raw")
+ parser.add_argument("-ii", "--input_infra_image_topic", help="Topic name for input infrared image",
+ type=str, default="/camera/infra/image_raw")
+ parser.add_argument("-oc", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/rgb_image_objects_annotated")
+ parser.add_argument("-oi", "--output_infra_image_topic", help="Topic name for output annotated infrared image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/infra_image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help='Device to use, either "cpu" or "cuda", defaults to "cuda"',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ detection_estimation_node = ObjectDetectionGemNode(
+ device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ input_infra_image_topic=args.input_infra_image_topic,
+ output_infra_image_topic=args.output_infra_image_topic,
+ detections_topic=args.detections_topic,
+ )
+ detection_estimation_node.listen()
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_nanodet_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_nanodet_node.py
new file mode 100755
index 0000000000..c2304ce6ff
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_nanodet_node.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import NanodetLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionNanodetNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated", detections_topic="/opendr/objects",
+ device="cuda", model="plus_m_1.5x_416"):
+ """
+ Creates a ROS Node for object detection with Nanodet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param model: the name of the model whose config file we want to load
+ :type model: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the object detector
+ self.object_detector = NanodetLearner(model_to_use=model, device=device)
+ self.object_detector.download(path=".", mode="pretrained", verbose=True)
+ self.object_detector.load("./nanodet_{}".format(model))
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_2d_nanodet_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object detection 2D Nanodet node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
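+        # Detections with confidence below the 0.35 threshold are discarded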
+ boxes = self.object_detector.infer(image, threshold=0.35)
+
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model that config file will be used", type=str, default="plus_m_1.5x_416")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_nanodet_node = ObjectDetectionNanodetNode(device=device, model=args.model,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_nanodet_node.listen()
+
+
+if __name__ == '__main__':
+ main()
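
The node above is a thin ROS wrapper around `NanodetLearner`: it downloads pretrained weights, loads them, and calls `infer()` on every incoming frame. A rough standalone sketch of the same calls outside ROS, assuming the `opendr` Python package is installed (`./test_image.jpg` is a hypothetical path used only for illustration):

```python
# Rough standalone sketch of the Nanodet calls used by the node above.
# "./test_image.jpg" is a hypothetical image path used only for illustration.
import cv2

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import NanodetLearner, draw_bounding_boxes

learner = NanodetLearner(model_to_use="plus_m_1.5x_416", device="cpu")
learner.download(path=".", mode="pretrained", verbose=True)
learner.load("./nanodet_plus_m_1.5x_416")

frame = cv2.imread("./test_image.jpg")               # BGR HxWxC, as the node receives it
boxes = learner.infer(Image(frame), threshold=0.35)  # same threshold as the callback above

annotated = draw_bounding_boxes(frame, boxes, class_names=learner.classes)
cv2.imwrite("./annotated.jpg", annotated)
```
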
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_ssd_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_ssd_node.py
new file mode 100755
index 0000000000..1e189bcd60
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_ssd_node.py
@@ -0,0 +1,166 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import SingleShotDetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+from opendr.perception.object_detection_2d import Seq2SeqNMSLearner, SoftNMS, FastNMS, ClusterNMS
+
+
+class ObjectDetectionSSDNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated", detections_topic="/opendr/objects",
+ device="cuda", backbone="vgg16_atrous", nms_type='default'):
+ """
+ Creates a ROS Node for object detection with SSD.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ :param nms_type: type of NMS method
+ :type nms_type: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the object detector
+ self.object_detector = SingleShotDetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("ssd_default_person")
+ self.custom_nms = None
+
+ # Initialize NMS if selected
+ if nms_type == 'seq2seq-nms':
+ self.custom_nms = Seq2SeqNMSLearner(fmod_map_type='EDGEMAP', iou_filtering=0.8,
+ app_feats='fmod', device=device)
+ self.custom_nms.download(model_name='seq2seq_pets_jpd_fmod', path='.')
+ self.custom_nms.load('./seq2seq_pets_jpd_fmod/', verbose=True)
+ rospy.loginfo("Object Detection 2D SSD node seq2seq-nms initialized.")
+ elif nms_type == 'soft-nms':
+ self.custom_nms = SoftNMS(nms_thres=0.45, device=device)
+ rospy.loginfo("Object Detection 2D SSD node soft-nms initialized.")
+ elif nms_type == 'fast-nms':
+ self.custom_nms = FastNMS(device=device)
+ rospy.loginfo("Object Detection 2D SSD node fast-nms initialized.")
+ elif nms_type == 'cluster-nms':
+ self.custom_nms = ClusterNMS(device=device)
+ rospy.loginfo("Object Detection 2D SSD node cluster-nms initialized.")
+ else:
+ rospy.loginfo("Object Detection 2D SSD node using default NMS.")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_2d_ssd_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object detection 2D SSD node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False, custom_nms=self.custom_nms)
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to vgg16_atrous",
+ type=str, default="vgg16_atrous", choices=["vgg16_atrous"])
+ parser.add_argument("--nms_type", help="Non-Maximum Suppression type, defaults to \"default\", options are "
+ "\"seq2seq-nms\", \"soft-nms\", \"fast-nms\", \"cluster-nms\"",
+ type=str, default="default",
+ choices=["default", "seq2seq-nms", "soft-nms", "fast-nms", "cluster-nms"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_ssd_node = ObjectDetectionSSDNode(device=device, backbone=args.backbone, nms_type=args.nms_type,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_ssd_node.listen()
+
+
+if __name__ == '__main__':
+ main()
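
Besides the default NMS, the SSD node can plug one of several custom NMS implementations into `infer()` through `custom_nms`. A minimal standalone sketch of the Soft-NMS variant, using only the calls that appear above (hypothetical image path, CPU chosen for portability):

```python
# Sketch of swapping Soft-NMS into the SSD learner, mirroring the node above.
# "./test_image.jpg" is a hypothetical image path; CPU is used for portability.
import cv2

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import SingleShotDetectorLearner, SoftNMS

detector = SingleShotDetectorLearner(backbone="vgg16_atrous", device="cpu")
detector.download(path=".", verbose=True)
detector.load("ssd_default_person")

soft_nms = SoftNMS(nms_thres=0.45, device="cpu")

frame = cv2.imread("./test_image.jpg")
boxes = detector.infer(Image(frame), threshold=0.45, keep_size=False, custom_nms=soft_nms)
for box in boxes:
    print(box.name, box.confidence)
```
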
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov3_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov3_node.py
new file mode 100755
index 0000000000..2b29cc0597
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov3_node.py
@@ -0,0 +1,140 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import YOLOv3DetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionYOLONode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated", detections_topic="/opendr/objects",
+ device="cuda", backbone="darknet53"):
+ """
+ Creates a ROS Node for object detection with YOLOV3.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the object detector
+ self.object_detector = YOLOv3DetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("yolo_default")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_2d_yolov3_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object detection 2D YOLOV3 node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.1, keep_size=False)
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_bounding_box_list(boxes) # Convert to ROS bounding_box_list
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to \"darknet53\"",
+ type=str, default="darknet53", choices=["darknet53"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_yolov3_node = ObjectDetectionYOLONode(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_yolov3_node.listen()
+
+
+if __name__ == '__main__':
+ main()
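
All of these nodes share the same argparse idiom for disabling an output topic: the `type=lambda value: ...` converter maps the literal string "None" (any casing) to Python `None`, so the corresponding publisher is simply not created. A tiny self-contained sketch of that behaviour:

```python
# Self-contained sketch of the topic-disabling argparse pattern used by the
# nodes above: passing the literal string "None" yields Python None, so the
# node skips creating that publisher.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--output_rgb_image_topic",
                    type=lambda value: value if value.lower() != "none" else None,
                    default="/opendr/image_objects_annotated")
args = parser.parse_args(["-o", "None"])

assert args.output_rgb_image_topic is None  # the annotated-image publisher would be skipped
```
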
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov5_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov5_node.py
new file mode 100644
index 0000000000..55918c5649
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_2d_yolov5_node.py
@@ -0,0 +1,139 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rospy
+from vision_msgs.msg import Detection2DArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import YOLOv5DetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionYOLONode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated", detections_topic="/opendr/objects",
+ device="cuda", model_name="yolov5s"):
+ """
+ Creates a ROS Node for object detection with YOLOV5.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model_name: network architecture name
+ :type model_name: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the object detector
+ self.object_detector = YOLOv5DetectorLearner(model_name=model_name, device=device)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_yolov5_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object detection YOLOV5 node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image)
+
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_bounding_box_list(boxes) # Convert to ROS bounding_box_list
+ if self.object_publisher is not None:
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes, line_thickness=3)
+            # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model_name", help="Network architecture, defaults to \"yolov5s\"",
+ type=str, default="yolov5s", choices=['yolov5s', 'yolov5n', 'yolov5m', 'yolov5l', 'yolov5x',
+ 'yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'custom'])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_yolov5_node = ObjectDetectionYOLONode(device=device, model_name=args.model_name,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+ object_detection_yolov5_node.listen()
+
+
+if __name__ == '__main__':
+ main()
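
Each `main()` above repeats the same GPU fallback. Read as a helper, the logic looks like the sketch below, written for the PyTorch-backed nodes (the MXNet-backed ones substitute `mx.context.num_gpus() > 0` for `torch.cuda.is_available()`):

```python
# Sketch of the device-selection fallback repeated in the mains above,
# written as a helper for the PyTorch-backed nodes.
import torch


def select_device(requested: str) -> str:
    """Return 'cuda' only if it was requested and a GPU is actually visible."""
    try:
        if requested == "cuda" and torch.cuda.is_available():
            return "cuda"
        if requested == "cuda":
            print("GPU not found. Using CPU instead.")
            return "cpu"
        print("Using CPU.")
        return "cpu"
    except Exception:
        print("Using CPU.")
        return "cpu"


print(select_device("cuda"))
```
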
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_3d_voxel.py b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_3d_voxel_node.py
old mode 100644
new mode 100755
similarity index 51%
rename from projects/opendr_ws/src/perception/scripts/object_detection_3d_voxel.py
rename to projects/opendr_ws/src/opendr_perception/scripts/object_detection_3d_voxel_node.py
index 6d6b74015a..3c43514906
--- a/projects/opendr_ws/src/perception/scripts/object_detection_3d_voxel.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_detection_3d_voxel_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import argparse
import torch
import os
import rospy
@@ -26,11 +27,11 @@ class ObjectDetection3DVoxelNode:
def __init__(
self,
input_point_cloud_topic="/opendr/dataset_point_cloud",
- output_detection3d_topic="/opendr/detection3d",
+ detections_topic="/opendr/objects3d",
device="cuda:0",
model_name="tanet_car_xyres_16",
model_config_path=os.path.join(
- "..", "..", "src", "opendr", "perception", "object_detection_3d",
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
"voxel_object_detection_3d", "second_detector", "configs", "tanet",
"ped_cycle", "test_short.proto"
),
@@ -39,9 +40,9 @@ def __init__(
"""
Creates a ROS Node for 3D object detection
:param input_point_cloud_topic: Topic from which we are reading the input point cloud
- :type input_image_topic: str
- :param output_detection3d_topic: Topic to which we are publishing the annotations
- :type output_detection3d_topic: str
+ :type input_point_cloud_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations
+ :type detections_topic: str
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
:param model_name: the pretrained model to download or a trained model in temp_dir
@@ -58,15 +59,13 @@ def __init__(
self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
- # Initialize OpenDR ROSBridge object
+ self.input_point_cloud_topic = input_point_cloud_topic
self.bridge = ROSBridge()
self.detection_publisher = rospy.Publisher(
- output_detection3d_topic, Detection3DArray, queue_size=10
+ detections_topic, Detection3DArray, queue_size=1
)
- rospy.Subscriber(input_point_cloud_topic, ROS_PointCloud, self.callback)
-
def callback(self, data):
"""
Callback that process the input data and publishes to the corresponding topics
@@ -80,39 +79,67 @@ def callback(self, data):
# Convert detected boxes to ROS type and publish
ros_boxes = self.bridge.to_ros_boxes_3d(detection_boxes, classes=["Car", "Van", "Truck", "Pedestrian", "Cyclist"])
- if self.detection_publisher is not None:
- self.detection_publisher.publish(ros_boxes)
- rospy.loginfo("Published detection boxes")
-
-if __name__ == "__main__":
- # Automatically run on GPU/CPU
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
- # initialize ROS node
- rospy.init_node("opendr_voxel_detection_3d", anonymous=True)
- rospy.loginfo("Voxel Detection 3D node started")
-
- model_name = rospy.get_param("~model_name", "tanet_car_xyres_16")
- model_config_path = rospy.get_param(
- "~model_config_path", os.path.join(
- "..", "..", "src", "opendr", "perception", "object_detection_3d",
+ self.detection_publisher.publish(ros_boxes)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_detection_3d_voxel_node', anonymous=True)
+ rospy.Subscriber(self.input_point_cloud_topic, ROS_PointCloud, self.callback, queue_size=1, buff_size=10000000)
+
+ rospy.loginfo("Object Detection 3D Voxel Node started.")
+ rospy.spin()
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_point_cloud_topic",
+ help="Point Cloud topic provided by either a point_cloud_dataset_node or any other 3D Point Cloud Node",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=str, default="/opendr/objects3d")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="tanet_car_xyres_16", choices=["tanet_car_xyres_16"])
+ parser.add_argument(
+ "-c", "--model_config_path", help="Path to a model .proto config",
+ type=str, default=os.path.join(
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
"voxel_object_detection_3d", "second_detector", "configs", "tanet",
- "car", "test_short.proto"
+ "car", "xyres_16.proto"
)
)
- temp_dir = rospy.get_param("~temp_dir", "temp")
- input_point_cloud_topic = rospy.get_param(
- "~input_point_cloud_topic", "/opendr/dataset_point_cloud"
- )
- rospy.loginfo("Using model_name: {}".format(model_name))
+ parser.add_argument("-t", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
- # created node object
voxel_node = ObjectDetection3DVoxelNode(
device=device,
- model_name=model_name,
- model_config_path=model_config_path,
- input_point_cloud_topic=input_point_cloud_topic,
- temp_dir=temp_dir,
+ model_name=args.model_name,
+ model_config_path=args.model_config_path,
+ input_point_cloud_topic=args.input_point_cloud_topic,
+ temp_dir=args.temp_dir,
+ detections_topic=args.detections_topic,
)
- # begin ROS communications
- rospy.spin()
+
+ voxel_node.listen()
+
+
+if __name__ == '__main__':
+ main()
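
Note that the default `--model_config_path` above contains a literal `$OPENDR_HOME` component, and `os.path.join()` keeps that string as-is. If the learner does not expand environment variables itself (not verified here), the path can be resolved explicitly before it is used:

```python
# Sketch of resolving the $OPENDR_HOME placeholder used in the default config
# path above. os.path.join() does not expand environment variables, so
# os.path.expandvars() is one way to resolve it when OPENDR_HOME is set.
import os

default_config = os.path.join(
    "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
    "voxel_object_ection_3d".replace("object_detection", "object_detection") if False else "voxel_object_detection_3d",
    "second_detector", "configs", "tanet", "car", "xyres_16.proto"
)

resolved_config = os.path.expandvars(default_config)
print(resolved_config)  # e.g. /home/user/opendr/src/... when OPENDR_HOME is set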
diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py
old mode 100644
new mode 100755
similarity index 50%
rename from projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py
rename to projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py
index 70d66c69a8..8844e336a4
--- a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_deep_sort.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_deep_sort_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,16 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import argparse
import cv2
import torch
import os
-from opendr.engine.target import TrackingAnnotation
+from opendr.engine.target import TrackingAnnotationList
import rospy
from vision_msgs.msg import Detection2DArray
from std_msgs.msg import Int32MultiArray
from sensor_msgs.msg import Image as ROS_Image
from opendr_bridge import ROSBridge
-from opendr.engine.learners import Learner
from opendr.perception.object_tracking_2d import (
ObjectTracking2DDeepSortLearner,
ObjectTracking2DFairMotLearner
@@ -33,11 +33,11 @@
class ObjectTracking2DDeepSortNode:
def __init__(
self,
- detector: Learner,
- input_image_topic="/usb_cam/image_raw",
- output_detection_topic="/opendr/detection",
- output_tracking_id_topic="/opendr/tracking_id",
- output_image_topic="/opendr/image_annotated",
+ detector=None,
+ input_rgb_image_topic="/usb_cam/image_raw",
+ output_detection_topic="/opendr/objects",
+ output_tracking_id_topic="/opendr/objects_tracking_id",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
device="cuda:0",
model_name="deep_sort",
temp_dir="temp",
@@ -46,11 +46,11 @@ def __init__(
Creates a ROS Node for 2D object tracking
:param detector: Learner to generate object detections
:type detector: Learner
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
annotated image)
- :type output_image_topic: str
+ :type output_rgb_image_topic: str
:param output_detection_topic: Topic to which we are publishing the detections
:type output_detection_topic: str
:param output_tracking_id_topic: Topic to which we are publishing the tracking ids
@@ -63,7 +63,6 @@ def __init__(
:type temp_dir: str
"""
- # # Initialize the face detector
self.detector = detector
self.learner = ObjectTracking2DDeepSortLearner(
device=device, temp_path=temp_dir,
@@ -73,22 +72,23 @@ def __init__(
self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
- # Initialize OpenDR ROSBridge object
self.bridge = ROSBridge()
- self.tracking_id_publisher = rospy.Publisher(
- output_tracking_id_topic, Int32MultiArray, queue_size=10
- )
+ self.input_rgb_image_topic = input_rgb_image_topic
- if output_image_topic is not None:
- self.output_image_publisher = rospy.Publisher(
- output_image_topic, ROS_Image, queue_size=10
+ if output_tracking_id_topic is not None:
+ self.tracking_id_publisher = rospy.Publisher(
+ output_tracking_id_topic, Int32MultiArray, queue_size=10
)
- self.detection_publisher = rospy.Publisher(
- output_detection_topic, Detection2DArray, queue_size=10
- )
+ if output_rgb_image_topic is not None:
+ self.output_image_publisher = rospy.Publisher(
+ output_rgb_image_topic, ROS_Image, queue_size=10
+ )
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
+ if output_detection_topic is not None:
+ self.detection_publisher = rospy.Publisher(
+ output_detection_topic, Detection2DArray, queue_size=10
+ )
def callback(self, data):
"""
@@ -101,8 +101,7 @@ def callback(self, data):
image = self.bridge.from_ros_image(data, encoding="bgr8")
detection_boxes = self.detector.infer(image)
image_with_detections = ImageWithDetections(image.numpy(), detection_boxes)
- print(image_with_detections.data.shape)
- tracking_boxes = self.learner.infer(image_with_detections)
+ tracking_boxes = self.learner.infer(image_with_detections, swap_left_top=True)
if self.output_image_publisher is not None:
frame = image.opencv()
@@ -111,22 +110,26 @@ def callback(self, data):
Image(frame), encoding="bgr8"
)
self.output_image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
- ids = [tracking_box.id for tracking_box in tracking_boxes]
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
if self.detection_publisher is not None:
+ ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
self.detection_publisher.publish(ros_boxes)
- rospy.loginfo("Published detection boxes")
-
- ros_ids = Int32MultiArray()
- ros_ids.data = ids
if self.tracking_id_publisher is not None:
+ ids = [tracking_box.id for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
self.tracking_id_publisher.publish(ros_ids)
- rospy.loginfo("Published tracking ids")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_tracking_2d_deep_sort_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+
+ rospy.loginfo("Object Tracking 2D Deep Sort Node started.")
+ rospy.spin()
colors = [
@@ -139,7 +142,7 @@ def callback(self, data):
]
-def draw_predictions(frame, predictions: TrackingAnnotation, is_centered=False, is_flipped_xy=True):
+def draw_predictions(frame, predictions: TrackingAnnotationList, is_centered=False, is_flipped_xy=True):
global colors
w, h, _ = frame.shape
@@ -174,36 +177,65 @@ def draw_predictions(frame, predictions: TrackingAnnotation, is_centered=False,
)
-if __name__ == "__main__":
- # Automatically run on GPU/CPU
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
- # initialize ROS node
- rospy.init_node("opendr_deep_sort", anonymous=True)
- rospy.loginfo("Deep Sort node started")
-
- model_name = rospy.get_param("~model_name", "deep_sort")
- temp_dir = rospy.get_param("~temp_dir", "temp")
- input_image_topic = rospy.get_param(
- "~input_image_topic", "/opendr/dataset_image"
- )
- rospy.loginfo("Using model_name: {}".format(model_name))
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic",
+ help="Input Image topic provided by either an image_dataset_node, webcam or any other image node",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic",
+ help="Output annotated image topic with a visualization of detections and their ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("-t", "--tracking_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="deep_sort", choices=["deep_sort"])
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
detection_learner = ObjectTracking2DFairMotLearner(
- device=device, temp_path=temp_dir,
+ device=device, temp_path=args.temp_dir,
)
- if not os.path.exists(os.path.join(temp_dir, "fairmot_dla34")):
- ObjectTracking2DFairMotLearner.download("fairmot_dla34", temp_dir)
+ if not os.path.exists(os.path.join(args.temp_dir, "fairmot_dla34")):
+ ObjectTracking2DFairMotLearner.download("fairmot_dla34", args.temp_dir)
- detection_learner.load(os.path.join(temp_dir, "fairmot_dla34"), verbose=True)
+ detection_learner.load(os.path.join(args.temp_dir, "fairmot_dla34"), verbose=True)
- # created node object
deep_sort_node = ObjectTracking2DDeepSortNode(
detector=detection_learner,
device=device,
- model_name=model_name,
- input_image_topic=input_image_topic,
- temp_dir=temp_dir,
+ model_name=args.model_name,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ temp_dir=args.temp_dir,
+ output_detection_topic=args.detections_topic,
+ output_tracking_id_topic=args.tracking_id_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
)
- # begin ROS communications
- rospy.spin()
+
+ deep_sort_node.listen()
+
+
+if __name__ == '__main__':
+ main()
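
The DeepSORT node chains two learners: FairMOT produces per-frame detections, which are wrapped together with the frame into `ImageWithDetections` and handed to the tracker. A rough non-ROS sketch of that hand-off, using only the calls visible above (the `ImageWithDetections` import path is not shown in these hunks and is assumed; the `deep_sort` weights are assumed to already exist under `temp/`; `./frame.jpg` is hypothetical):

```python
# Rough non-ROS sketch of the detector/tracker hand-off performed in the
# callback above. The ImageWithDetections import path is assumed, the
# deep_sort weights are assumed to be present under temp/deep_sort, and
# "./frame.jpg" is a hypothetical image path.
import os

import cv2

from opendr.engine.data import Image, ImageWithDetections  # assumed import path
from opendr.perception.object_tracking_2d import (
    ObjectTracking2DDeepSortLearner,
    ObjectTracking2DFairMotLearner,
)

temp_dir = "temp"

detector = ObjectTracking2DFairMotLearner(device="cpu", temp_path=temp_dir)
if not os.path.exists(os.path.join(temp_dir, "fairmot_dla34")):
    ObjectTracking2DFairMotLearner.download("fairmot_dla34", temp_dir)
detector.load(os.path.join(temp_dir, "fairmot_dla34"), verbose=True)

tracker = ObjectTracking2DDeepSortLearner(device="cpu", temp_path=temp_dir)
tracker.load(os.path.join(temp_dir, "deep_sort"), verbose=True)  # weights assumed present

image = Image(cv2.imread("./frame.jpg"))
detections = detector.infer(image)
tracks = tracker.infer(ImageWithDetections(image.numpy(), detections), swap_left_top=True)
print([track.id for track in tracks])
```
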
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_fair_mot_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_fair_mot_node.py
new file mode 100755
index 0000000000..6fe2f81f46
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_fair_mot_node.py
@@ -0,0 +1,226 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import cv2
+import torch
+import os
+from opendr.engine.target import TrackingAnnotationList
+import rospy
+from vision_msgs.msg import Detection2DArray
+from std_msgs.msg import Int32MultiArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+from opendr.perception.object_tracking_2d import (
+ ObjectTracking2DFairMotLearner,
+)
+from opendr.engine.data import Image
+
+
+class ObjectTracking2DFairMotNode:
+ def __init__(
+ self,
+ input_rgb_image_topic="/usb_cam/image_raw",
+ output_detection_topic="/opendr/objects",
+ output_tracking_id_topic="/opendr/objects_tracking_id",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
+ device="cuda:0",
+ model_name="fairmot_dla34",
+ temp_dir="temp",
+ ):
+ """
+ Creates a ROS Node for 2D object tracking
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ annotated image)
+ :type output_rgb_image_topic: str
+ :param output_detection_topic: Topic to which we are publishing the detections
+ :type output_detection_topic: str
+ :param output_tracking_id_topic: Topic to which we are publishing the tracking ids
+ :type output_tracking_id_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model_name: the pretrained model to download or a saved model in temp_dir folder to use
+ :type model_name: str
+ :param temp_dir: the folder to download models
+ :type temp_dir: str
+ """
+
+ self.learner = ObjectTracking2DFairMotLearner(
+ device=device, temp_path=temp_dir,
+ )
+ if not os.path.exists(os.path.join(temp_dir, model_name)):
+ ObjectTracking2DFairMotLearner.download(model_name, temp_dir)
+
+ self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
+
+ self.bridge = ROSBridge()
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_detection_topic is not None:
+ self.detection_publisher = rospy.Publisher(
+ output_detection_topic, Detection2DArray, queue_size=10
+ )
+
+ if output_tracking_id_topic is not None:
+ self.tracking_id_publisher = rospy.Publisher(
+ output_tracking_id_topic, Int32MultiArray, queue_size=10
+ )
+
+ if output_rgb_image_topic is not None:
+ self.output_image_publisher = rospy.Publisher(
+ output_rgb_image_topic, ROS_Image, queue_size=10
+ )
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding="bgr8")
+ tracking_boxes = self.learner.infer(image)
+
+ if self.output_image_publisher is not None:
+ frame = image.opencv()
+ draw_predictions(frame, tracking_boxes)
+ message = self.bridge.to_ros_image(
+ Image(frame), encoding="bgr8"
+ )
+ self.output_image_publisher.publish(message)
+
+ if self.detection_publisher is not None:
+ detection_boxes = tracking_boxes.bounding_box_list()
+ ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
+ self.detection_publisher.publish(ros_boxes)
+
+ if self.tracking_id_publisher is not None:
+ ids = [tracking_box.id for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
+ self.tracking_id_publisher.publish(ros_ids)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_tracking_2d_fair_mot_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+
+ rospy.loginfo("Object Tracking 2D Fair Mot Node started.")
+ rospy.spin()
+
+
+colors = [
+ (255, 0, 255),
+ (0, 0, 255),
+ (0, 255, 0),
+ (255, 0, 0),
+ (35, 69, 55),
+ (43, 63, 54),
+]
+
+
+def draw_predictions(frame, predictions: TrackingAnnotationList, is_centered=False, is_flipped_xy=True):
+ global colors
+ w, h, _ = frame.shape
+
+    for prediction in predictions.boxes:
+
+ if not hasattr(prediction, "id"):
+ prediction.id = 0
+
+ color = colors[int(prediction.id) * 7 % len(colors)]
+
+ x = prediction.left
+ y = prediction.top
+
+ if is_flipped_xy:
+ x = prediction.top
+ y = prediction.left
+
+ if is_centered:
+ x -= prediction.width
+ y -= prediction.height
+
+ cv2.rectangle(
+ frame,
+ (int(x), int(y)),
+ (
+ int(x + prediction.width),
+ int(y + prediction.height),
+ ),
+ color,
+ 2,
+ )
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic",
+ help="Input Image topic provided by either an image_dataset_node, webcam or any other image node",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic",
+ help="Output annotated image topic with a visualization of detections and their ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("-t", "--tracking_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="fairmot_dla34", choices=["fairmot_dla34"])
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ fair_mot_node = ObjectTracking2DFairMotNode(
+ device=device,
+ model_name=args.model_name,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ temp_dir=args.temp_dir,
+ output_detection_topic=args.detections_topic,
+ output_tracking_id_topic=args.tracking_id_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ )
+
+ fair_mot_node.listen()
+
+
+if __name__ == '__main__':
+ main()
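
Downstream consumers can pair the node's two outputs, since the `--tracking_id_topic` help text states the id array has the same element count as the detection array. A hedged sketch of a minimal ROS1 consumer (no time synchronization is attempted here; it only keeps the latest message of each topic):

```python
# Hedged sketch of a downstream ROS1 node consuming the two outputs of the
# tracking node above. The ids array is documented to match the detections
# element-for-element, but the two topics are not synchronized here; the
# latest message of each is kept purely for illustration.
import rospy
from std_msgs.msg import Int32MultiArray
from vision_msgs.msg import Detection2DArray

latest = {"detections": None, "ids": None}


def report():
    if latest["detections"] is not None and latest["ids"] is not None:
        rospy.loginfo("%d detections / %d tracking ids",
                      len(latest["detections"].detections), len(latest["ids"].data))


def on_detections(msg):
    latest["detections"] = msg
    report()


def on_ids(msg):
    latest["ids"] = msg
    report()


rospy.init_node("tracking_consumer_example", anonymous=True)
rospy.Subscriber("/opendr/objects", Detection2DArray, on_detections, queue_size=1)
rospy.Subscriber("/opendr/objects_tracking_id", Int32MultiArray, on_ids, queue_size=1)
rospy.spin()
```
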
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_siamrpn_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_siamrpn_node.py
new file mode 100644
index 0000000000..6dd2a79291
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_2d_siamrpn_node.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import cv2
+from math import dist
+import rospy
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2D
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import TrackingAnnotation, BoundingBox
+from opendr.perception.object_tracking_2d import SiamRPNLearner
+from opendr.perception.object_detection_2d import YOLOv3DetectorLearner
+
+
+class ObjectTrackingSiamRPNNode:
+ def __init__(self, object_detector, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_tracking_annotated",
+ tracker_topic="/opendr/tracked_object",
+ device="cuda"):
+ """
+ Creates a ROS Node for object tracking with SiamRPN.
+ :param object_detector: An object detector learner to use for initialization
+ :type object_detector: opendr.engine.learners.Learner
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param tracker_topic: Topic to which we are publishing the annotation
+ :type tracker_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if tracker_topic is not None:
+ self.object_publisher = rospy.Publisher(tracker_topic, Detection2D, queue_size=1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROSBridge()
+
+ self.object_detector = object_detector
+        # Initialize the tracker
+ self.tracker = SiamRPNLearner(device=device)
+ self.image = None
+ self.initialized = False
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_tracking_2d_siamrpn_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.img_callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Object Tracking 2D SiamRPN node started.")
+ rospy.spin()
+
+ def img_callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ self.image = image
+
+ if not self.initialized:
+ # Run object detector to initialize the tracker
+            boxes = self.object_detector.infer(image)
+
+ img_center = [int(image.data.shape[2] // 2), int(image.data.shape[1] // 2)] # width, height
+ # Find the box that is closest to the center of the image
+ center_box = BoundingBox("", left=0, top=0, width=0, height=0)
+ min_distance = dist([center_box.left, center_box.top], img_center)
+ for box in boxes:
+ new_distance = dist([int(box.left + box.width // 2), int(box.top + box.height // 2)], img_center)
+ if new_distance < min_distance:
+ center_box = box
+                    min_distance = new_distance
+
+ # Initialize tracker with the most central box found
+ init_box = TrackingAnnotation(center_box.name,
+ center_box.left, center_box.top, center_box.width, center_box.height,
+ id=0, score=center_box.confidence)
+
+ self.tracker.infer(self.image, init_box)
+ self.initialized = True
+ rospy.loginfo("Object Tracking 2D SiamRPN node initialized with the most central bounding box.")
+
+ if self.initialized:
+ # Run object tracking
+ box = self.tracker.infer(image)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_single_tracking_annotation(box)
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ cv2.rectangle(image, (box.left, box.top),
+ (box.left + box.width, box.top + box.height),
+ (0, 255, 255), 3)
+ # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_tracking_annotated")
+ parser.add_argument("-t", "--tracker_topic", help="Topic name for tracker messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/tracked_object")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detector = YOLOv3DetectorLearner(backbone="darknet53", device=device)
+ object_detector.download(path=".", verbose=True)
+ object_detector.load("yolo_default")
+
+ object_tracker_2d_siamrpn_node = ObjectTrackingSiamRPNNode(object_detector=object_detector, device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ tracker_topic=args.tracker_topic)
+ object_tracker_2d_siamrpn_node.listen()
+
+
+if __name__ == '__main__':
+ main()
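
The SiamRPN node seeds the tracker with the detection closest to the image centre, but the tracker can equally be initialized from a manually chosen box. A minimal sketch mirroring the init/update calls above (image paths and box coordinates are hypothetical):

```python
# Sketch of seeding SiamRPN with a manually chosen box instead of a detector,
# mirroring the init/update calls in the node above. Image paths and box
# coordinates are hypothetical.
import cv2

from opendr.engine.data import Image
from opendr.engine.target import TrackingAnnotation
from opendr.perception.object_tracking_2d import SiamRPNLearner

tracker = SiamRPNLearner(device="cpu")

first_frame = Image(cv2.imread("./frame_000.jpg"))
init_box = TrackingAnnotation("person", 100, 80, 60, 120, id=0, score=1.0)
tracker.infer(first_frame, init_box)    # first call initializes the tracker

next_frame = Image(cv2.imread("./frame_001.jpg"))
box = tracker.infer(next_frame)         # subsequent calls track the target
print(box.left, box.top, box.width, box.height)
```
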
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py
new file mode 100755
index 0000000000..ae2af44475
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/object_tracking_3d_ab3dmot_node.py
@@ -0,0 +1,174 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import torch
+import rospy
+from vision_msgs.msg import Detection3DArray
+from std_msgs.msg import Int32MultiArray
+from sensor_msgs.msg import PointCloud as ROS_PointCloud
+from opendr_bridge import ROSBridge
+from opendr.perception.object_tracking_3d import ObjectTracking3DAb3dmotLearner
+from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
+
+
+class ObjectTracking3DAb3dmotNode:
+ def __init__(
+ self,
+ detector=None,
+ input_point_cloud_topic="/opendr/dataset_point_cloud",
+ output_detection3d_topic="/opendr/detection3d",
+ output_tracking3d_id_topic="/opendr/tracking3d_id",
+ device="cuda:0",
+ ):
+ """
+ Creates a ROS Node for 3D object tracking
+ :param detector: Learner that provides 3D object detections
+ :type detector: Learner
+ :param input_point_cloud_topic: Topic from which we are reading the input point cloud
+ :type input_point_cloud_topic: str
+ :param output_detection3d_topic: Topic to which we are publishing the annotations
+ :type output_detection3d_topic: str
+ :param output_tracking3d_id_topic: Topic to which we are publishing the tracking ids
+ :type output_tracking3d_id_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+
+ self.detector = detector
+ self.learner = ObjectTracking3DAb3dmotLearner(
+ device=device
+ )
+
+ self.bridge = ROSBridge()
+ self.input_point_cloud_topic = input_point_cloud_topic
+
+ if output_detection3d_topic is not None:
+ self.detection_publisher = rospy.Publisher(
+ output_detection3d_topic, Detection3DArray, queue_size=10
+ )
+
+ if output_tracking3d_id_topic is not None:
+ self.tracking_id_publisher = rospy.Publisher(
+ output_tracking3d_id_topic, Int32MultiArray, queue_size=10
+ )
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics
+        :param data: input message
+        :type data: sensor_msgs.msg.PointCloud
+        """
+
+        # Convert sensor_msgs.msg.PointCloud into OpenDR PointCloud
+ point_cloud = self.bridge.from_ros_point_cloud(data)
+ detection_boxes = self.detector.infer(point_cloud)
+ tracking_boxes = self.learner.infer(detection_boxes)
+
+ if self.detection_publisher is not None:
+ # Convert detected boxes to ROS type and publish
+ ros_boxes = self.bridge.to_ros_boxes_3d(detection_boxes, classes=["Car", "Van", "Truck", "Pedestrian", "Cyclist"])
+ self.detection_publisher.publish(ros_boxes)
+
+ if self.tracking_id_publisher is not None:
+ ids = [tracking_box.id for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
+ self.tracking_id_publisher.publish(ros_ids)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_object_ab3dmot_tracking_3d_node', anonymous=True)
+ rospy.Subscriber(self.input_point_cloud_topic, ROS_PointCloud, self.callback, queue_size=1, buff_size=10000000)
+
+ rospy.loginfo("Object Tracking 3D Ab3dmot Node started.")
+ rospy.spin()
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_point_cloud_topic",
+ help="Point Cloud topic provided by either a point_cloud_dataset_node or any other 3D Point Cloud Node",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects3d")
+ parser.add_argument("-t", "--tracking3d_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-dn", "--detector_model_name", help="Name of the trained model",
+ type=str, default="tanet_car_xyres_16", choices=["tanet_car_xyres_16"])
+ parser.add_argument(
+ "-dc", "--detector_model_config_path", help="Path to a model .proto config",
+ type=str, default=os.path.join(
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
+ "voxel_object_detection_3d", "second_detector", "configs", "tanet",
+ "car", "xyres_16.proto"
+ )
+ )
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ input_point_cloud_topic = args.input_point_cloud_topic
+ detector_model_name = args.detector_model_name
+ temp_dir = args.temp_dir
+ detector_model_config_path = args.detector_model_config_path
+ output_detection3d_topic = args.detections_topic
+ output_tracking3d_id_topic = args.tracking3d_id_topic
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ detector = VoxelObjectDetection3DLearner(
+ device=device,
+ temp_path=temp_dir,
+ model_config_path=detector_model_config_path
+ )
+ if not os.path.exists(os.path.join(temp_dir, detector_model_name)):
+ VoxelObjectDetection3DLearner.download(detector_model_name, temp_dir)
+
+ detector.load(os.path.join(temp_dir, detector_model_name), verbose=True)
+
+ ab3dmot_node = ObjectTracking3DAb3dmotNode(
+ detector=detector,
+ device=device,
+ input_point_cloud_topic=input_point_cloud_topic,
+ output_detection3d_topic=output_detection3d_topic,
+ output_tracking3d_id_topic=output_tracking3d_id_topic,
+ )
+
+ ab3dmot_node.listen()
+
+
+if __name__ == '__main__':
+ main()
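
The 3D pipeline mirrors the 2D one: the voxel detector turns a point cloud into 3D boxes and AB3DMOT assigns ids to them. A rough non-ROS sketch of the chain using the calls shown above (the dummy zero point cloud and the `PointCloud(...)` construction are assumptions standing in for real LiDAR input):

```python
# Rough non-ROS sketch of the detector -> tracker chain used in the callback
# above. The (N, 4) zero point cloud and the PointCloud constructor usage are
# assumptions standing in for real LiDAR data; model and config paths follow
# the defaults of the node.
import os

import numpy as np

from opendr.engine.data import PointCloud
from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
from opendr.perception.object_tracking_3d import ObjectTracking3DAb3dmotLearner

temp_dir = "temp"
model_name = "tanet_car_xyres_16"
config_path = os.path.expandvars(os.path.join(
    "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
    "voxel_object_detection_3d", "second_detector", "configs", "tanet",
    "car", "xyres_16.proto"))

detector = VoxelObjectDetection3DLearner(device="cpu", temp_path=temp_dir,
                                         model_config_path=config_path)
if not os.path.exists(os.path.join(temp_dir, model_name)):
    VoxelObjectDetection3DLearner.download(model_name, temp_dir)
detector.load(os.path.join(temp_dir, model_name), verbose=True)

tracker = ObjectTracking3DAb3dmotLearner(device="cpu")

point_cloud = PointCloud(np.zeros((1000, 4), dtype=np.float32))  # placeholder cloud
detections = detector.infer(point_cloud)
tracks = tracker.infer(detections)
print([track.id for track in tracks])
```
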
diff --git a/projects/opendr_ws/src/perception/scripts/panoptic_segmentation_efficient_ps.py b/projects/opendr_ws/src/opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py
similarity index 54%
rename from projects/opendr_ws/src/perception/scripts/panoptic_segmentation_efficient_ps.py
rename to projects/opendr_ws/src/opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py
index bce86e46ea..04f7024b2b 100755
--- a/projects/opendr_ws/src/perception/scripts/panoptic_segmentation_efficient_ps.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/panoptic_segmentation_efficient_ps_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import sys
+from pathlib import Path
import argparse
from typing import Optional
@@ -29,27 +31,31 @@
class EfficientPsNode:
def __init__(self,
+ input_rgb_image_topic: str,
checkpoint: str,
- input_image_topic: str,
output_heatmap_topic: Optional[str] = None,
- output_visualization_topic: Optional[str] = None,
+ output_rgb_visualization_topic: Optional[str] = None,
detailed_visualization: bool = False
):
"""
Initialize the EfficientPS ROS node and create an instance of the respective learner class.
- :param checkpoint: Path to a saved model
+ :param checkpoint: This is either a path to a saved model or one of [cityscapes, kitti] to download
+ pre-trained model weights.
:type checkpoint: str
- :param input_image_topic: ROS topic for the input image stream
- :type input_image_topic: str
+ :param input_rgb_image_topic: ROS topic for the input image stream
+ :type input_rgb_image_topic: str
:param output_heatmap_topic: ROS topic for the predicted semantic and instance maps
:type output_heatmap_topic: str
- :param output_visualization_topic: ROS topic for the generated visualization of the panoptic map
- :type output_visualization_topic: str
+ :param output_rgb_visualization_topic: ROS topic for the generated visualization of the panoptic map
+ :type output_rgb_visualization_topic: str
+ :param detailed_visualization: if True, generate a combined overview of the input RGB image and the
+ semantic, instance, and panoptic segmentation maps and publish it on output_rgb_visualization_topic
+ :type detailed_visualization: bool
"""
+ self.input_rgb_image_topic = input_rgb_image_topic
self.checkpoint = checkpoint
- self.input_image_topic = input_image_topic
self.output_heatmap_topic = output_heatmap_topic
- self.output_visualization_topic = output_visualization_topic
+ self.output_rgb_visualization_topic = output_rgb_visualization_topic
self.detailed_visualization = detailed_visualization
# Initialize all ROS related things
@@ -59,14 +65,27 @@ def __init__(self,
self._visualization_publisher = None
# Initialize the panoptic segmentation network
- self._learner = EfficientPsLearner()
+ config_file = Path(sys.modules[
+ EfficientPsLearner.__module__].__file__).parent / 'configs' / 'singlegpu_cityscapes.py'
+ self._learner = EfficientPsLearner(str(config_file))
+
+ # Other
+ self._tmp_folder = Path(__file__).parent.parent / 'tmp' / 'efficientps'
+ self._tmp_folder.mkdir(exist_ok=True, parents=True)
def _init_learner(self) -> bool:
"""
- Load the weights from the specified checkpoint file.
+ The model can be initialized via
+ 1. downloading pre-trained weights for Cityscapes or KITTI.
+ 2. passing a path to an existing checkpoint file.
This has not been done in the __init__() function since logging is available only once the node is registered.
"""
+ if self.checkpoint in ['cityscapes', 'kitti']:
+ file_path = EfficientPsLearner.download(str(self._tmp_folder),
+ trained_on=self.checkpoint)
+ self.checkpoint = file_path
+
if self._learner.load(self.checkpoint):
rospy.loginfo('Successfully loaded the checkpoint.')
return True
@@ -78,27 +97,28 @@ def _init_subscribers(self):
"""
Subscribe to all relevant topics.
"""
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
def _init_publisher(self):
"""
Set up the publishers as requested by the user.
"""
if self.output_heatmap_topic is not None:
- self._instance_heatmap_publisher = rospy.Publisher(f'{self.output_heatmap_topic}/instance', ROS_Image,
- queue_size=10)
- self._semantic_heatmap_publisher = rospy.Publisher(f'{self.output_heatmap_topic}/semantic', ROS_Image,
- queue_size=10)
- if self.output_visualization_topic is not None:
- self._visualization_publisher = rospy.Publisher(self.output_visualization_topic, ROS_Image, queue_size=10)
+ self._instance_heatmap_publisher = rospy.Publisher(
+ f'{self.output_heatmap_topic}/instance', ROS_Image, queue_size=10)
+ self._semantic_heatmap_publisher = rospy.Publisher(
+ f'{self.output_heatmap_topic}/semantic', ROS_Image, queue_size=10)
+ if self.output_rgb_visualization_topic is not None:
+ self._visualization_publisher = rospy.Publisher(self.output_rgb_visualization_topic,
+ ROS_Image, queue_size=10)
def listen(self):
"""
Start the node and begin processing input data. The order of the function calls ensures that the node does not
try to process input images without being in a trained state.
"""
- rospy.init_node('efficient_ps', anonymous=True)
- rospy.loginfo("EfficientPS node started!")
+ rospy.init_node('opendr_efficient_panoptic_segmentation_node', anonymous=True)
+ rospy.loginfo("Panoptic segmentation EfficientPS node started.")
if self._init_learner():
self._init_publisher()
self._init_subscribers()
@@ -121,33 +141,41 @@ def callback(self, data: ROS_Image):
if self._visualization_publisher is not None and self._visualization_publisher.get_num_connections() > 0:
panoptic_image = EfficientPsLearner.visualize(image, prediction, show_figure=False,
detailed=self.detailed_visualization)
- self._visualization_publisher.publish(self._bridge.to_ros_image(panoptic_image))
+ self._visualization_publisher.publish(self._bridge.to_ros_image(panoptic_image, encoding="rgb8"))
if self._instance_heatmap_publisher is not None and self._instance_heatmap_publisher.get_num_connections() > 0:
self._instance_heatmap_publisher.publish(self._bridge.to_ros_image(prediction[0]))
if self._semantic_heatmap_publisher is not None and self._semantic_heatmap_publisher.get_num_connections() > 0:
self._semantic_heatmap_publisher.publish(self._bridge.to_ros_image(prediction[1]))
- except Exception:
- rospy.logwarn('Failed to generate prediction.')
+ except Exception as e:
+ rospy.logwarn(f'Failed to generate prediction: {e}')
if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('checkpoint', type=str, help='load the model weights from the provided path')
- parser.add_argument('image_topic', type=str, help='listen to images on this topic')
- parser.add_argument('--heatmap_topic', type=str, help='publish the semantic and instance maps on this topic')
- parser.add_argument('--visualization_topic', type=str,
+ parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument('-i', '--input_rgb_image_topic', type=str, default='/usb_cam/image_raw',
+ help='listen to RGB images on this topic')
+ parser.add_argument('-oh', '--output_heatmap_topic',
+ type=lambda value: value if value.lower() != "none" else None,
+ default='/opendr/panoptic',
+ help='publish the semantic and instance maps on this topic as "OUTPUT_HEATMAP_TOPIC/semantic" \
+ and "OUTPUT_HEATMAP_TOPIC/instance"')
+ parser.add_argument('-ov', '--output_rgb_image_topic',
+ type=lambda value: value if value.lower() != "none" else None,
+ default='/opendr/panoptic/rgb_visualization',
help='publish the panoptic segmentation map as an RGB image on this topic or a more detailed \
overview if using the --detailed_visualization flag')
parser.add_argument('--detailed_visualization', action='store_true',
help='generate a combined overview of the input RGB image and the semantic, instance, and \
- panoptic segmentation maps')
+ panoptic segmentation maps and publish it on OUTPUT_RGB_IMAGE_TOPIC')
+ parser.add_argument('--checkpoint', type=str, default='cityscapes',
+ help='download pretrained models [cityscapes, kitti] or load from the provided path')
args = parser.parse_args()
- efficient_ps_node = EfficientPsNode(args.checkpoint,
- args.image_topic,
- args.heatmap_topic,
- args.visualization_topic,
+ efficient_ps_node = EfficientPsNode(args.input_rgb_image_topic,
+ args.checkpoint,
+ args.output_heatmap_topic,
+ args.output_rgb_image_topic,
args.detailed_visualization)
efficient_ps_node.listen()
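
Several nodes in this patch share the same argparse convention: passing the literal string "None" for an output topic argument disables the corresponding publisher. A minimal standalone sketch of that pattern, using a hypothetical --output_topic argument, is shown below.

# Sketch of the "None disables the publisher" argparse convention used across these nodes.
import argparse

def topic_or_none(value):
    # Treat the literal string "None" (any casing) as "publisher disabled".
    return value if value.lower() != "none" else None

parser = argparse.ArgumentParser()
parser.add_argument("--output_topic", type=topic_or_none, default="/opendr/example")
args = parser.parse_args()

if args.output_topic is None:
    print("Publisher disabled.")
else:
    print(f"Publishing on {args.output_topic}")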
diff --git a/projects/opendr_ws/src/perception/scripts/point_cloud_dataset.py b/projects/opendr_ws/src/opendr_perception/scripts/point_cloud_dataset_node.py
old mode 100644
new mode 100755
similarity index 52%
rename from projects/opendr_ws/src/perception/scripts/point_cloud_dataset.py
rename to projects/opendr_ws/src/opendr_perception/scripts/point_cloud_dataset_node.py
index 0701e1005e..010b90b1d1
--- a/projects/opendr_ws/src/perception/scripts/point_cloud_dataset.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/point_cloud_dataset_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import argparse
import os
import rospy
import time
@@ -27,48 +28,57 @@ def __init__(
self,
dataset: DatasetIterator,
output_point_cloud_topic="/opendr/dataset_point_cloud",
+ data_fps=10,
):
"""
Creates a ROS Node for publishing dataset point clouds
"""
- # Initialize the face detector
self.dataset = dataset
- # Initialize OpenDR ROSBridge object
self.bridge = ROSBridge()
+ self.delay = 1.0 / data_fps
- if output_point_cloud_topic is not None:
- self.output_point_cloud_publisher = rospy.Publisher(
- output_point_cloud_topic, ROS_PointCloud, queue_size=10
- )
+ self.output_point_cloud_publisher = rospy.Publisher(
+ output_point_cloud_topic, ROS_PointCloud, queue_size=10
+ )
def start(self):
+ rospy.loginfo("Timing point cloud images")
i = 0
-
while not rospy.is_shutdown():
-
point_cloud = self.dataset[i % len(self.dataset)][0] # Dataset should have a (PointCloud, Target) pair as elements
-
- rospy.loginfo("Publishing point_cloud [" + str(i) + "]")
message = self.bridge.to_ros_point_cloud(
point_cloud
)
self.output_point_cloud_publisher.publish(message)
- time.sleep(0.1)
-
+ time.sleep(self.delay)
i += 1
-if __name__ == "__main__":
-
- rospy.init_node('opendr_point_cloud_dataset')
-
- dataset_path = "KITTI/opendr_nano_kitti"
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-d", "--dataset_path",
+                        help="Path to a dataset. If it does not exist, the nano KITTI dataset will be downloaded there.",
+ type=str, default="KITTI/opendr_nano_kitti")
+ parser.add_argument("-ks", "--kitti_subsets_path",
+ help="Path to kitti subsets. Used only if a KITTI dataset is downloaded",
+ type=str,
+ default="../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets")
+ parser.add_argument("-o", "--output_point_cloud_topic", help="Topic name to publish the data",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-f", "--fps", help="Data FPS",
+ type=float, default=10)
+ args = parser.parse_args()
+
+ dataset_path = args.dataset_path
+ kitti_subsets_path = args.kitti_subsets_path
+ output_point_cloud_topic = args.output_point_cloud_topic
+ data_fps = args.fps
if not os.path.exists(dataset_path):
dataset_path = KittiDataset.download_nano_kitti(
- "KITTI", kitti_subsets_path="../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets",
+ "KITTI", kitti_subsets_path=kitti_subsets_path,
create_dir=True,
).path
@@ -78,5 +88,16 @@ def start(self):
dataset_path + "/training/calib",
)
- dataset_node = PointCloudDatasetNode(dataset)
+ rospy.init_node('opendr_point_cloud_dataset_node', anonymous=True)
+
+ dataset_node = PointCloudDatasetNode(
+ dataset, output_point_cloud_topic=output_point_cloud_topic, data_fps=data_fps
+ )
+
dataset_node.start()
+ rospy.loginfo("Point cloud dataset node started.")
+ rospy.spin()
+
+
+if __name__ == '__main__':
+ main()
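
A quick way to check the dataset publisher is to attach a throwaway subscriber to its output topic. The sketch below assumes the bridge publishes sensor_msgs/PointCloud (the ROS_PointCloud alias used elsewhere in this patch) on the default /opendr/dataset_point_cloud topic; adjust the message type or topic name if your setup differs.

# Illustrative listener for the dataset publisher above; not part of the patch.
import rospy
from sensor_msgs.msg import PointCloud

def on_cloud(msg):
    rospy.loginfo("Received a point cloud with %d points", len(msg.points))

if __name__ == "__main__":
    rospy.init_node("point_cloud_listener", anonymous=True)
    rospy.Subscriber("/opendr/dataset_point_cloud", PointCloud, on_cloud, queue_size=1)
    rospy.spin()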
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/pose_estimation_node.py b/projects/opendr_ws/src/opendr_perception/scripts/pose_estimation_node.py
new file mode 100755
index 0000000000..c07321a3ec
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/pose_estimation_node.py
@@ -0,0 +1,162 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rospy
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge.msg import OpenDRPose2D
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.perception.pose_estimation import draw
+from opendr.perception.pose_estimation import LightweightOpenPoseLearner
+
+
+class PoseEstimationNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_pose_annotated", detections_topic="/opendr/poses", device="cuda",
+ num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+ Creates a ROS Node for pose estimation with Lightweight OpenPose.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no pose detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param num_refinement_stages: Specifies the number of pose estimation refinement stages added to the
+        model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower and more accurate
+        inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.pose_publisher = rospy.Publisher(detections_topic, OpenDRPose2D, queue_size=1)
+ else:
+ self.pose_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the pose estimation learner
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_pose_estimation_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Pose estimation node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run pose estimation
+ poses = self.pose_estimator.infer(image)
+
+ # Publish detections in ROS message
+ if self.pose_publisher is not None:
+ for pose in poses:
+ # Convert OpenDR pose to ROS pose message using bridge and publish it
+ self.pose_publisher.publish(self.bridge.to_ros_pose(pose))
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with poses
+ for pose in poses:
+ draw(image, pose)
+ # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ pose_estimator_node = PoseEstimationNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+ pose_estimator_node.listen()
+
+
+if __name__ == '__main__':
+ main()
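
The pose messages are published one per detected person using the OpenDRPose2D message that this patch introduces in opendr_bridge. The following standalone sketch only confirms that messages arrive; it assumes the workspace containing opendr_bridge is built and sourced and that the default /opendr/poses topic is used.

# Minimal consumer for the pose output of the node above; fields are not inspected here.
import rospy
from opendr_bridge.msg import OpenDRPose2D

def on_pose(msg):
    rospy.loginfo("Received an OpenDRPose2D message")

if __name__ == "__main__":
    rospy.init_node("pose_listener", anonymous=True)
    rospy.Subscriber("/opendr/poses", OpenDRPose2D, on_pose, queue_size=1)
    rospy.spin()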
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py b/projects/opendr_ws/src/opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py
new file mode 100755
index 0000000000..098e297a18
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/rgbd_hand_gesture_recognition_node.py
@@ -0,0 +1,167 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import cv2
+import numpy as np
+import torch
+
+import rospy
+import message_filters
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Classification2D
+
+from opendr.engine.data import Image
+from opendr.perception.multimodal_human_centric import RgbdHandGestureLearner
+from opendr_bridge import ROSBridge
+
+
+class RgbdHandGestureNode:
+
+ def __init__(self, input_rgb_image_topic="/kinect2/qhd/image_color_rect",
+ input_depth_image_topic="/kinect2/qhd/image_depth_rect",
+ output_gestures_topic="/opendr/gestures", device="cuda", delay=0.1):
+ """
+ Creates a ROS Node for gesture recognition from RGBD. Assuming that the following drivers have been installed:
+ https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param input_depth_image_topic: Topic from which we are reading the input depth image
+ :type input_depth_image_topic: str
+ :param output_gestures_topic: Topic to which we are publishing the predicted gesture class
+ :type output_gestures_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param delay: Maximum delay (in seconds) between the RGB message and the depth message for them to be synchronized
+ :type delay: float
+ """
+
+ self.input_rgb_image_topic = input_rgb_image_topic
+ self.input_depth_image_topic = input_depth_image_topic
+ self.delay = delay
+
+ self.gesture_publisher = rospy.Publisher(output_gestures_topic, Classification2D, queue_size=10)
+
+ self.bridge = ROSBridge()
+
+ # Initialize the gesture recognition
+ self.gesture_learner = RgbdHandGestureLearner(n_class=16, architecture="mobilenet_v2", device=device)
+ model_path = './mobilenet_v2'
+ if not os.path.exists(model_path):
+ self.gesture_learner.download(path=model_path)
+ self.gesture_learner.load(path=model_path)
+
+ # mean and std for preprocessing, based on HANDS dataset
+ self.mean = np.asarray([0.485, 0.456, 0.406, 0.0303]).reshape(1, 1, 4)
+ self.std = np.asarray([0.229, 0.224, 0.225, 0.0353]).reshape(1, 1, 4)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data
+ """
+ rospy.init_node('opendr_rgbd_hand_gesture_recognition_node', anonymous=True)
+
+ image_sub = message_filters.Subscriber(self.input_rgb_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
+ depth_sub = message_filters.Subscriber(self.input_depth_image_topic, ROS_Image, queue_size=1, buff_size=10000000)
+ # synchronize image and depth data topics
+ ts = message_filters.ApproximateTimeSynchronizer([image_sub, depth_sub], queue_size=10, slop=self.delay,
+ allow_headerless=True)
+ ts.registerCallback(self.callback)
+
+ rospy.loginfo("RGBD hand gesture recognition node started.")
+ rospy.spin()
+
+ def callback(self, rgb_data, depth_data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics
+ :param rgb_data: input image message
+ :type rgb_data: sensor_msgs.msg.Image
+ :param depth_data: input depth image message
+ :type depth_data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image and preprocess
+ rgb_image = self.bridge.from_ros_image(rgb_data, encoding='bgr8')
+ depth_data.encoding = 'mono16'
+ depth_image = self.bridge.from_ros_image_to_depth(depth_data, encoding='mono16')
+ img = self.preprocess(rgb_image, depth_image)
+
+ # Run gesture recognition
+ gesture_class = self.gesture_learner.infer(img)
+
+ # Publish results
+ ros_gesture = self.bridge.from_category_to_rosclass(gesture_class)
+ self.gesture_publisher.publish(ros_gesture)
+
+ def preprocess(self, rgb_image, depth_image):
+ """
+ Preprocess rgb_image, depth_image and concatenate them
+ :param rgb_image: input RGB image
+ :type rgb_image: engine.data.Image
+ :param depth_image: input depth image
+ :type depth_image: engine.data.Image
+ """
+ rgb_image = rgb_image.convert(format='channels_last') / (2**8 - 1)
+ depth_image = depth_image.convert(format='channels_last') / (2**16 - 1)
+
+ # resize the images to 224x224
+ rgb_image = cv2.resize(rgb_image, (224, 224))
+ depth_image = cv2.resize(depth_image, (224, 224))
+
+ # concatenate and standardize
+ img = np.concatenate([rgb_image, np.expand_dims(depth_image, axis=-1)], axis=-1)
+ img = (img - self.mean) / self.std
+ img = Image(img, dtype=np.float32)
+ return img
+
+
+if __name__ == '__main__':
+ # default topics are according to kinectv2 drivers at https://github.com/OpenKinect/libfreenect2
+    # and https://github.com/code-iai/iai_kinect2
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-ic", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/kinect2/qhd/image_color_rect")
+ parser.add_argument("-id", "--input_depth_image_topic", help="Topic name for input depth image",
+ type=str, default="/kinect2/qhd/image_depth_rect")
+ parser.add_argument("-o", "--output_gestures_topic", help="Topic name for predicted gesture class",
+ type=str, default="/opendr/gestures")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda",
+ choices=["cuda", "cpu"])
+ parser.add_argument("--delay", help="The delay (in seconds) with which RGB message and"
+ "depth message can be synchronized", type=float, default=0.1)
+
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+ except:
+ print("Using CPU")
+ device = "cpu"
+
+ gesture_node = RgbdHandGestureNode(input_rgb_image_topic=args.input_rgb_image_topic,
+ input_depth_image_topic=args.input_depth_image_topic,
+ output_gestures_topic=args.output_gestures_topic, device=device,
+ delay=args.delay)
+
+ gesture_node.listen()
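
The preprocess() method above combines the two modalities into a single 4-channel input. The sketch below replays the same steps on synthetic arrays so the shapes and scaling are easy to verify; it is an illustration, not code from the patch.

# Standalone sketch of the RGBD preprocessing: rescale to [0, 1], resize to 224x224,
# concatenate RGB and depth, then standardize with the HANDS statistics quoted above.
import cv2
import numpy as np

mean = np.asarray([0.485, 0.456, 0.406, 0.0303]).reshape(1, 1, 4)
std = np.asarray([0.229, 0.224, 0.225, 0.0353]).reshape(1, 1, 4)

rgb = np.random.randint(0, 2**8, (480, 640, 3)).astype(np.float64) / (2**8 - 1)
depth = np.random.randint(0, 2**16, (480, 640)).astype(np.float64) / (2**16 - 1)

rgb = cv2.resize(rgb, (224, 224))
depth = cv2.resize(depth, (224, 224))

img = np.concatenate([rgb, np.expand_dims(depth, axis=-1)], axis=-1)
img = (img - mean) / std
print(img.shape)  # (224, 224, 4)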
diff --git a/projects/opendr_ws/src/opendr_perception/scripts/semantic_segmentation_bisenet_node.py b/projects/opendr_ws/src/opendr_perception/scripts/semantic_segmentation_bisenet_node.py
new file mode 100755
index 0000000000..0047e8fe2e
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_perception/scripts/semantic_segmentation_bisenet_node.py
@@ -0,0 +1,193 @@
+#!/usr/bin/env python3
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import numpy as np
+import torch
+import cv2
+import colorsys
+
+import rospy
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import Heatmap
+from opendr.perception.semantic_segmentation import BisenetLearner
+
+
+class BisenetNode:
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw", output_heatmap_topic="/opendr/heatmap",
+ output_rgb_image_topic="/opendr/heatmap_visualization", device="cuda"):
+ """
+ Creates a ROS Node for semantic segmentation with Bisenet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_heatmap_topic: Topic to which we are publishing the heatmap in the form of a ROS image containing
+ class ids
+ :type output_heatmap_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the heatmap image blended with the
+ input image and a class legend for visualization purposes
+ :type output_rgb_image_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ self.input_rgb_image_topic = input_rgb_image_topic
+
+ if output_heatmap_topic is not None:
+ self.heatmap_publisher = rospy.Publisher(output_heatmap_topic, ROS_Image, queue_size=1)
+ else:
+ self.heatmap_publisher = None
+
+ if output_rgb_image_topic is not None:
+ self.visualization_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
+ else:
+ self.visualization_publisher = None
+
+ self.bridge = ROSBridge()
+
+ # Initialize the semantic segmentation model
+ self.learner = BisenetLearner(device=device)
+ self.learner.download(path="bisenet_camvid")
+ self.learner.load("bisenet_camvid")
+
+ self.class_names = ["Bicyclist", "Building", "Car", "Column Pole", "Fence", "Pedestrian", "Road", "Sidewalk",
+ "Sign Symbol", "Sky", "Tree", "Unknown"]
+ self.colors = self.get_distinct_colors(len(self.class_names)) # Generate n distinct colors
+
+ def listen(self):
+ """
+ Start the node and begin processing input data.
+ """
+ rospy.init_node('opendr_semantic_segmentation_bisenet_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Semantic segmentation BiSeNet node started.")
+ rospy.spin()
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ try:
+ # Run semantic segmentation to retrieve the OpenDR heatmap
+ heatmap = self.learner.infer(image)
+
+ # Publish heatmap in the form of an image containing class ids
+ if self.heatmap_publisher is not None:
+ heatmap = Heatmap(heatmap.data.astype(np.uint8)) # Convert to uint8
+ self.heatmap_publisher.publish(self.bridge.to_ros_image(heatmap))
+
+ # Publish heatmap color visualization blended with the input image and a class color legend
+ if self.visualization_publisher is not None:
+ heatmap_colors = Image(self.colors[heatmap.numpy()])
+ image = Image(cv2.resize(image.convert("channels_last", "bgr"), (960, 720)))
+ alpha = 0.4 # 1.0 means full input image, 0.0 means full heatmap
+ beta = (1.0 - alpha)
+ image_blended = cv2.addWeighted(image.opencv(), alpha, heatmap_colors.opencv(), beta, 0.0)
+ # Add a legend
+ image_blended = self.add_legend(image_blended, np.unique(heatmap.data))
+
+ self.visualization_publisher.publish(self.bridge.to_ros_image(Image(image_blended),
+ encoding='bgr8'))
+ except Exception as e:
+ print(e)
+ rospy.logwarn('Failed to generate prediction.')
+
+ def add_legend(self, image, unique_class_ints):
+ # Text setup
+ origin_x, origin_y = 5, 5 # Text origin x, y
+ color_rectangle_size = 25
+ font_size = 1.0
+ font_thickness = 2
+ w_max = 0
+ for i in range(len(unique_class_ints)):
+ text = self.class_names[unique_class_ints[i]] # Class name
+ x, y = origin_x, origin_y + i * color_rectangle_size # Text position
+ # Determine class color and convert to regular integers
+ color = (int(self.colors[unique_class_ints[i]][0]),
+ int(self.colors[unique_class_ints[i]][1]),
+ int(self.colors[unique_class_ints[i]][2]))
+ # Get text width and height
+ (w, h), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, font_size, font_thickness)
+ if w >= w_max:
+ w_max = w
+ # Draw partial background rectangle
+ image = cv2.rectangle(image, (x - origin_x, y),
+ (x + origin_x + color_rectangle_size + w_max,
+ y + color_rectangle_size),
+ (255, 255, 255, 0.5), -1)
+ # Draw color rectangle
+ image = cv2.rectangle(image, (x, y),
+ (x + color_rectangle_size, y + color_rectangle_size), color, -1)
+ # Draw class name text
+ image = cv2.putText(image, text, (x + color_rectangle_size + 2, y + h),
+ cv2.FONT_HERSHEY_SIMPLEX, font_size, (0, 0, 0), font_thickness)
+ return image
+
+ @staticmethod
+ def hsv_to_rgb(h, s, v):
+ (r, g, b) = colorsys.hsv_to_rgb(h, s, v)
+ return np.array([int(255 * r), int(255 * g), int(255 * b)])
+
+ def get_distinct_colors(self, n):
+ hue_partition = 1.0 / (n + 1)
+ return np.array([self.hsv_to_rgb(hue_partition * value, 1.0, 1.0) for value in range(0, n)]).astype(np.uint8)
+
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_heatmap_topic", help="Topic to which we are publishing the heatmap in the form "
+ "of a ROS image containing class ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/heatmap")
+ parser.add_argument("-ov", "--output_rgb_image_topic", help="Topic to which we are publishing the heatmap image "
+ "blended with the input image and a class legend for "
+ "visualization purposes",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/heatmap_visualization")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ bisenet_node = BisenetNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_heatmap_topic=args.output_heatmap_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic)
+ bisenet_node.listen()
+
+
+if __name__ == '__main__':
+ main()
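
The visualization path above maps each class id to a distinct hue and alpha-blends the result onto the input frame. The following self-contained sketch reproduces that idea with synthetic data; the 12-class count and the 0.4 blending weight simply mirror the values used above.

# Sketch: build an evenly spaced color palette, map a class-id heatmap through it,
# and blend it onto an image with cv2.addWeighted. All arrays here are synthetic.
import colorsys
import cv2
import numpy as np

def get_distinct_colors(n):
    return np.array([[int(255 * c) for c in colorsys.hsv_to_rgb(i / (n + 1), 1.0, 1.0)]
                     for i in range(n)], dtype=np.uint8)

colors = get_distinct_colors(12)                                   # one color per class
heatmap = np.random.randint(0, 12, (720, 960))                     # stand-in for learner output
image = np.random.randint(0, 255, (720, 960, 3), dtype=np.uint8)   # stand-in for the input frame

blended = cv2.addWeighted(image, 0.4, colors[heatmap], 0.6, 0.0)
print(blended.shape)  # (720, 960, 3)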
diff --git a/projects/opendr_ws/src/perception/scripts/skeleton_based_action_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/skeleton_based_action_recognition_node.py
old mode 100644
new mode 100755
similarity index 64%
rename from projects/opendr_ws/src/perception/scripts/skeleton_based_action_recognition.py
rename to projects/opendr_ws/src/opendr_perception/scripts/skeleton_based_action_recognition_node.py
index 0556acfd52..0bb74a0e8e
--- a/projects/opendr_ws/src/perception/scripts/skeleton_based_action_recognition.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/skeleton_based_action_recognition_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,13 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
+import argparse
import rospy
import torch
import numpy as np
from std_msgs.msg import String
from vision_msgs.msg import ObjectHypothesis
-from vision_msgs.msg import Detection2DArray
+from opendr_bridge.msg import OpenDRPose2D
from sensor_msgs.msg import Image as ROS_Image
from opendr_bridge import ROSBridge
from opendr.perception.pose_estimation import draw
@@ -31,18 +31,19 @@
class SkeletonActionRecognitionNode:
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_pose_annotated",
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+ output_rgb_image_topic="/opendr/image_pose_annotated",
pose_annotations_topic="/opendr/poses",
- output_category_topic="/opendr/skeleton_based_action_recognition",
- output_category_description_topic="/opendr/skeleton_based_action_recognition_description",
+ output_category_topic="/opendr/skeleton_recognized_action",
+ output_category_description_topic="/opendr/skeleton_recognized_action_description",
device="cuda", model='stgcn'):
"""
Creates a ROS Node for skeleton-based action recognition
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
annotated image)
- :type output_image_topic: str
+ :type output_rgb_image_topic: str
:param pose_annotations_topic: Topic to which we are publishing the annotations (if None, we are not publishing
annotated pose annotations)
:type pose_annotations_topic: str
@@ -60,34 +61,34 @@ def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/
"""
# Set up ROS topics and bridge
+ self.input_rgb_image_topic = input_rgb_image_topic
+ self.bridge = ROSBridge()
- if output_category_topic is not None:
- self.hypothesis_publisher = rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=10)
- else:
- self.hypothesis_publisher = None
-
- if output_category_description_topic is not None:
- self.string_publisher = rospy.Publisher(output_category_description_topic, String, queue_size=10)
- else:
- self.string_publisher = None
-
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
+ if output_rgb_image_topic is not None:
+ self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
else:
self.image_publisher = None
if pose_annotations_topic is not None:
- self.pose_publisher = rospy.Publisher(pose_annotations_topic, Detection2DArray, queue_size=10)
+ self.pose_publisher = rospy.Publisher(pose_annotations_topic, OpenDRPose2D, queue_size=1)
else:
self.pose_publisher = None
- self.input_image_topic = input_image_topic
- self.bridge = ROSBridge()
+ if output_category_topic is not None:
+ self.hypothesis_publisher = rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=1)
+ else:
+ self.hypothesis_publisher = None
+
+ if output_category_description_topic is not None:
+ self.string_publisher = rospy.Publisher(output_category_description_topic, String, queue_size=1)
+ else:
+ self.string_publisher = None
# Initialize the pose estimation
- self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=0,
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=2,
mobilenet_use_stride=False,
- half_precision=False)
+ half_precision=False
+ )
self.pose_estimator.download(path=".", verbose=True)
self.pose_estimator.load("openpose_default")
@@ -111,9 +112,9 @@ def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_skeleton_based_action_recognition', anonymous=True)
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
- rospy.loginfo("Skeleton-based action recognition node started!")
+ rospy.init_node('opendr_skeleton_action_recognition_node', anonymous=True)
+ rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+ rospy.loginfo("Skeleton-based action recognition node started.")
rospy.spin()
def callback(self, data):
@@ -155,6 +156,7 @@ def callback(self, data):
# Run action recognition
category = self.action_classifier.infer(skeleton_seq)
+ category.confidence = float(category.confidence.max())
if self.hypothesis_publisher is not None:
self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))
@@ -171,7 +173,8 @@ def _select_2_poses(poses):
energy.append(s)
energy = np.array(energy)
index = energy.argsort()[::-1][0:2]
- selected_poses.append(poses[index])
+ for i in range(len(index)):
+ selected_poses.append(poses[index[i]])
return selected_poses
@@ -188,16 +191,49 @@ def _pose2numpy(num_current_frames, poses_list):
if __name__ == '__main__':
- # Select the device for running the
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-p", "--pose_annotations_topic", help="Topic name for pose annotations",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("-c", "--output_category_topic", help="Topic name for recognized action category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/skeleton_recognized_action")
+ parser.add_argument("-d", "--output_category_description_topic",
+ help="Topic name for description of the recognized action category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/skeleton_recognized_action_description")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model to use, either \"stgcn\" or \"pstgcn\"",
+ type=str, default="stgcn", choices=["stgcn", "pstgcn"])
+
+ args = parser.parse_args()
+
try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
print("GPU not found. Using CPU instead.")
- device = 'cpu'
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
except:
- device = 'cpu'
-
- pose_estimation_node = SkeletonActionRecognitionNode(device=device)
- pose_estimation_node.listen()
+ print("Using CPU.")
+ device = "cpu"
+
+ skeleton_action_recognition_node = \
+ SkeletonActionRecognitionNode(input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ pose_annotations_topic=args.pose_annotations_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device,
+ model=args.model)
+ skeleton_action_recognition_node.listen()
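
The change to _select_2_poses appends the two highest-energy poses one index at a time instead of indexing the pose list with a whole NumPy array. A minimal standalone illustration of that selection, with made-up energies, is shown below.

# Sketch of the top-2 selection used above: sort energies descending, take two indices,
# and append the corresponding poses one by one.
import numpy as np

poses = ["pose_a", "pose_b", "pose_c", "pose_d"]
energy = np.array([0.1, 0.7, 0.3, 0.9])

selected = []
index = energy.argsort()[::-1][0:2]   # indices of the two largest energies
for i in range(len(index)):
    selected.append(poses[index[i]])

print(selected)  # ['pose_d', 'pose_b']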
diff --git a/projects/opendr_ws/src/perception/scripts/speech_command_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/speech_command_recognition_node.py
similarity index 54%
rename from projects/opendr_ws/src/perception/scripts/speech_command_recognition.py
rename to projects/opendr_ws/src/opendr_perception/scripts/speech_command_recognition_node.py
index 4726b478a1..3d6385fd58 100755
--- a/projects/opendr_ws/src/perception/scripts/speech_command_recognition.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/speech_command_recognition_node.py
@@ -28,26 +28,26 @@
class SpeechRecognitionNode:
- def __init__(self, input_topic='/audio/audio', prediction_topic="/opendr/speech_recognition",
- buffer_size=1.5, model='matchboxnet', model_path=None, device='cuda'):
+ def __init__(self, input_audio_topic="/audio/audio", output_speech_command_topic="/opendr/speech_recognition",
+ buffer_size=1.5, model="matchboxnet", model_path=None, device="cuda"):
"""
Creates a ROS Node for speech command recognition
- :param input_topic: Topic from which the audio data is received
- :type input_topic: str
- :param prediction_topic: Topic to which the predictions are published
- :type prediction_topic: str
+ :param input_audio_topic: Topic from which the audio data is received
+ :type input_audio_topic: str
+ :param output_speech_command_topic: Topic to which the predictions are published
+ :type output_speech_command_topic: str
:param buffer_size: Length of the audio buffer in seconds
:type buffer_size: float
:param model: base speech command recognition model: matchboxnet or quad_selfonn
:type model: str
- :param device: device for inference ('cpu' or 'cuda')
+ :param device: device for inference ("cpu" or "cuda")
:type device: str
"""
- self.publisher = rospy.Publisher(prediction_topic, Classification2D, queue_size=10)
+ self.publisher = rospy.Publisher(output_speech_command_topic, Classification2D, queue_size=10)
- rospy.Subscriber(input_topic, AudioData, self.callback)
+ rospy.Subscriber(input_audio_topic, AudioData, self.callback)
self.bridge = ROSBridge()
@@ -59,17 +59,17 @@ def __init__(self, input_topic='/audio/audio', prediction_topic="/opendr/speech_
# Initialize the recognition model
if model == "matchboxnet":
self.learner = MatchboxNetLearner(output_classes_n=20, device=device)
- load_path = './MatchboxNet'
+ load_path = "./MatchboxNet"
elif model == "edgespeechnets":
self.learner = EdgeSpeechNetsLearner(output_classes_n=20, device=device)
assert model_path is not None, "No pretrained EdgeSpeechNets model available for download"
elif model == "quad_selfonn":
self.learner = QuadraticSelfOnnLearner(output_classes_n=20, device=device)
- load_path = './QuadraticSelfOnn'
+ load_path = "./QuadraticSelfOnn"
# Download the recognition model
if model_path is None:
- self.learner.download_pretrained(path='.')
+ self.learner.download_pretrained(path=".")
self.learner.load(load_path)
else:
self.learner.load(model_path)
@@ -78,15 +78,15 @@ def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_speech_command_recognition', anonymous=True)
- rospy.loginfo("Speech command recognition node started!")
+ rospy.init_node("opendr_speech_command_recognition_node", anonymous=True)
+ rospy.loginfo("Speech command recognition node started.")
rospy.spin()
def callback(self, msg_data):
"""
Callback that processes the input data and publishes predictions to the output topic
- :param data: incoming message
- :type data: audio_common_msgs.msg.AudioData
+ :param msg_data: incoming message
+ :type msg_data: audio_common_msgs.msg.AudioData
"""
# Accumulate data until the buffer is full
data = np.reshape(np.frombuffer(msg_data.data, dtype=np.int16)/32768.0, (1, -1))
@@ -105,22 +105,36 @@ def callback(self, msg_data):
self.data_buffer = np.zeros((1, 1))
-if __name__ == '__main__':
- # Select the device for running
- try:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- except:
- device = 'cpu'
-
+if __name__ == "__main__":
parser = argparse.ArgumentParser()
- parser.add_argument('input_topic', type=str, help='listen to input data on this topic')
- parser.add_argument('--buffer_size', type=float, default=1.5, help='size of the audio buffer in seconds')
- parser.add_argument('--model', choices=["matchboxnet", "edgespeechnets", "quad_selfonn"], default="matchboxnet",
- help='model to be used for prediction: matchboxnet or quad_selfonn')
- parser.add_argument('--model_path', type=str,
- help='path to the model files, if not given, the pretrained model will be downloaded')
+ parser.add_argument("-i", "--input_audio_topic", type=str, default="audio/audio",
+ help="Listen to input data on this topic")
+ parser.add_argument("-o", "--output_speech_command_topic", type=str, default="/opendr/speech_recognition",
+ help="Topic name for speech command output")
+ parser.add_argument("--device", type=str, default="cuda", choices=["cuda", "cpu"],
+ help="Device to use (cpu, cuda)")
+ parser.add_argument("--buffer_size", type=float, default=1.5, help="Size of the audio buffer in seconds")
+ parser.add_argument("--model", default="matchboxnet", choices=["matchboxnet", "edgespeechnets", "quad_selfonn"],
+ help="Model to be used for prediction: matchboxnet, edgespeechnets or quad_selfonn")
+ parser.add_argument("--model_path", type=str,
+ help="Path to the model files, if not given, the pretrained model will be downloaded")
args = parser.parse_args()
- speech_node = SpeechRecognitionNode(input_topic=args.input_topic, buffer_size=args.buffer_size,
- model=args.model, model_path=args.model_path, device=device)
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+ except:
+ print("Using CPU")
+ device = "cpu"
+
+ speech_node = SpeechRecognitionNode(input_audio_topic=args.input_audio_topic,
+ output_speech_command_topic=args.output_speech_command_topic,
+ buffer_size=args.buffer_size, model=args.model, model_path=args.model_path,
+ device=device)
speech_node.listen()
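
The speech node accumulates audio by converting each audio_common_msgs/AudioData payload from int16 samples to floats in [-1, 1]. The sketch below shows just that conversion on a synthetic byte string.

# Standalone sketch of the int16-to-float audio conversion used in the callback above.
import numpy as np

raw_bytes = np.array([0, 16384, -16384, 32767], dtype=np.int16).tobytes()
data = np.reshape(np.frombuffer(raw_bytes, dtype=np.int16) / 32768.0, (1, -1))
print(data)  # [[ 0.   0.5 -0.5  0.99996948]]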
diff --git a/projects/opendr_ws/src/perception/scripts/video_activity_recognition.py b/projects/opendr_ws/src/opendr_perception/scripts/video_activity_recognition_node.py
similarity index 59%
rename from projects/opendr_ws/src/perception/scripts/video_activity_recognition.py
rename to projects/opendr_ws/src/opendr_perception/scripts/video_activity_recognition_node.py
index b79a462e3a..f05169f5ba 100755
--- a/projects/opendr_ws/src/perception/scripts/video_activity_recognition.py
+++ b/projects/opendr_ws/src/opendr_perception/scripts/video_activity_recognition_node.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2020-2022 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,12 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
+import argparse
import rospy
import torch
import torchvision
import cv2
-import numpy as np
from pathlib import Path
from std_msgs.msg import String
from vision_msgs.msg import ObjectHypothesis
@@ -31,20 +30,19 @@
class HumanActivityRecognitionNode:
-
def __init__(
self,
- input_image_topic="/usb_cam/image_raw",
+ input_rgb_image_topic="/usb_cam/image_raw",
output_category_topic="/opendr/human_activity_recognition",
output_category_description_topic="/opendr/human_activity_recognition_description",
device="cuda",
- model='cox3d-m'
+ model="cox3d-m",
):
"""
- Creates a ROS Node for face recognition
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_category_topic: Topic to which we are publishing the recognized face info
+ Creates a ROS Node for video-based human activity recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_category_topic: Topic to which we are publishing the recognized activity
(if None, we are not publishing the info)
:type output_category_topic: str
:param output_category_description_topic: Topic to which we are publishing the ID of the recognized action
@@ -52,12 +50,20 @@ def __init__(
:type output_category_description_topic: str
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
- :param model: architecture to use for human activity recognition.
+ :param model: Architecture to use for human activity recognition.
(Options: 'cox3d-s', 'cox3d-m', 'cox3d-l', 'x3d-xs', 'x3d-s', 'x3d-m', 'x3d-l')
:type model: str
"""
- assert model in {"cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", "x3d-l"}
+ assert model in {
+ "cox3d-s",
+ "cox3d-m",
+ "cox3d-l",
+ "x3d-xs",
+ "x3d-s",
+ "x3d-m",
+ "x3d-l",
+ }
model_name, model_size = model.split("-")
Learner = {"cox3d": CoX3DLearner, "x3d": X3DLearner}[model_name]
@@ -68,7 +74,9 @@ def __init__(
# Set up preprocessing
if model_name == "cox3d":
- self.preprocess = _image_preprocess(image_size=self.learner.model_hparams["image_size"])
+ self.preprocess = _image_preprocess(
+ image_size=self.learner.model_hparams["image_size"]
+ )
else: # == x3d
self.preprocess = _video_preprocess(
image_size=self.learner.model_hparams["image_size"],
@@ -76,23 +84,33 @@ def __init__(
)
# Set up ROS topics and bridge
+ self.input_rgb_image_topic = input_rgb_image_topic
self.hypothesis_publisher = (
- rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=10) if output_category_topic else None
+ rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=1)
+ if output_category_topic
+ else None
)
self.string_publisher = (
- rospy.Publisher(output_category_description_topic, String, queue_size=10) if output_category_topic else None
+ rospy.Publisher(output_category_description_topic, String, queue_size=1)
+ if output_category_description_topic
+ else None
)
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
self.bridge = ROSBridge()
def listen(self):
"""
Start the node and begin processing input data
"""
- rospy.init_node('opendr_human_activity_recognition', anonymous=True)
- rospy.loginfo("Human activity recognition node started!")
+ rospy.init_node("opendr_human_activity_recognition_node", anonymous=True)
+ rospy.Subscriber(
+ self.input_rgb_image_topic,
+ ROS_Image,
+ self.callback,
+ queue_size=1,
+ buff_size=10000000,
+ )
+ rospy.loginfo("Human activity recognition node started.")
rospy.spin()
def callback(self, data):
@@ -101,49 +119,43 @@ def callback(self, data):
:param data: input message
:type data: sensor_msgs.msg.Image
"""
- image = self.bridge.from_ros_image(data)
+ image = self.bridge.from_ros_image(data, encoding="rgb8")
if image is None:
return
- x = self.preprocess(image.numpy())
+ x = self.preprocess(image.convert("channels_first", "rgb"))
result = self.learner.infer(x)
assert len(result) == 1
category = result[0]
- category.confidence = float(max(category.confidence.max())) # Confidence for predicted class
+ category.confidence = float(category.confidence.max()) # Confidence for predicted class
category.description = KINETICS400_CLASSES[category.data] # Class name
if self.hypothesis_publisher is not None:
self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))
if self.string_publisher is not None:
- self.string_publisher.publish(self.bridge.to_ros_category_description(category))
+ self.string_publisher.publish(
+ self.bridge.to_ros_category_description(category)
+ )
-def _resize(image, width=None, height=None, inter=cv2.INTER_AREA):
+def _resize(image, size=None, inter=cv2.INTER_AREA):
# initialize the dimensions of the image to be resized and
# grab the image size
dim = None
(h, w) = image.shape[:2]
- # if both the width and height are None, then return the
- # original image
- if width is None and height is None:
- return image
-
- # check to see if the width is None
- if width is None:
- # calculate the ratio of the height and construct the
+ if h > w:
+ # calculate the ratio of the width and construct the
# dimensions
- r = height / float(h)
- dim = (int(w * r), height)
-
- # otherwise, the height is None
+ r = size / float(w)
+ dim = (size, int(h * r))
else:
- # calculate the ratio of the width and construct the
+ # calculate the ratio of the height and construct the
# dimensions
- r = width / float(w)
- dim = (width, int(h * r))
+ r = size / float(h)
+ dim = (int(w * r), size)
# resize the image
resized = cv2.resize(image, dim, interpolation=inter)
@@ -160,11 +172,11 @@ def _image_preprocess(image_size: int):
def wrapped(frame):
nonlocal standardize
frame = frame.transpose((1, 2, 0)) # C, H, W -> H, W, C
- frame = _resize(frame, height=image_size, width=image_size)
+ frame = _resize(frame, size=image_size)
frame = torch.tensor(frame).permute((2, 0, 1)) # H, W, C -> C, H, W
frame = frame / 255.0 # [0, 255] -> [0.0, 1.0]
frame = standardize(frame)
- return Image(frame, dtype=np.float)
+ return Image(frame, dtype=float)
return wrapped
@@ -179,7 +191,7 @@ def _video_preprocess(image_size: int, window_size: int):
def wrapped(frame):
nonlocal frames, standardize
frame = frame.transpose((1, 2, 0)) # C, H, W -> H, W, C
- frame = _resize(frame, height=image_size, width=image_size)
+ frame = _resize(frame, size=image_size)
frame = torch.tensor(frame).permute((2, 0, 1)) # H, W, C -> C, H, W
frame = frame / 255.0 # [0, 255] -> [0.0, 1.0]
frame = standardize(frame)
@@ -194,17 +206,46 @@ def wrapped(frame):
return wrapped
-if __name__ == '__main__':
- # Select the device for running the
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/usb_cam/image_raw")
+ parser.add_argument("-o", "--output_category_topic", help="Topic to which we are publishing the recognized activity",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/human_activity_recognition")
+ parser.add_argument("-od", "--output_category_description_topic",
+ help="Topic to which we are publishing the ID of the recognized action",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/human_activity_recognition_description")
+ parser.add_argument("--device", help='Device to use, either "cpu" or "cuda", defaults to "cuda"',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Architecture to use for human activity recognition.",
+ type=str, default="cox3d-m",
+ choices=["cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", "x3d-l"])
+ args = parser.parse_args()
+
try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
print("GPU not found. Using CPU instead.")
- device = 'cpu'
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
except:
- device = 'cpu'
-
- human_activity_recognition_node = HumanActivityRecognitionNode(device=device)
+ print("Using CPU.")
+ device = "cpu"
+
+ human_activity_recognition_node = HumanActivityRecognitionNode(
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device,
+ model=args.model,
+ )
human_activity_recognition_node.listen()
+
+
+if __name__ == "__main__":
+ main()
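A note on the topic arguments above: the `type=lambda value: ...` converters let a user disable an output topic by passing the literal string `none` on the command line. A small illustration of that conversion, using the same pattern and the default topic shown above (the argument subset is trimmed for brevity):

```python
import argparse

def none_or_str(value):
    # Treat the literal string "none" (any casing) as "do not publish on this topic".
    return value if value.lower() != "none" else None

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--output_category_topic", type=none_or_str,
                    default="/opendr/human_activity_recognition")

args = parser.parse_args(["-o", "None"])
print(args.output_category_topic)  # None -> the node would skip creating this publisher
```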
diff --git a/projects/opendr_ws/src/perception/src/.keep b/projects/opendr_ws/src/opendr_perception/src/.keep
similarity index 100%
rename from projects/opendr_ws/src/perception/src/.keep
rename to projects/opendr_ws/src/opendr_perception/src/.keep
diff --git a/projects/opendr_ws/src/opendr_planning/CMakeLists.txt b/projects/opendr_ws/src/opendr_planning/CMakeLists.txt
new file mode 100644
index 0000000000..f6f9a5900a
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_planning/CMakeLists.txt
@@ -0,0 +1,14 @@
+cmake_minimum_required(VERSION 3.0.2)
+project(opendr_planning)
+
+find_package(catkin REQUIRED COMPONENTS
+ roscpp
+ rospy
+ std_msgs
+)
+
+catkin_package()
+
+include_directories(
+ ${catkin_INCLUDE_DIRS}
+)
diff --git a/projects/opendr_ws/src/ros_bridge/include/ros_bridge/.keep b/projects/opendr_ws/src/opendr_planning/include/opendr_planning/.keep
similarity index 100%
rename from projects/opendr_ws/src/ros_bridge/include/ros_bridge/.keep
rename to projects/opendr_ws/src/opendr_planning/include/opendr_planning/.keep
diff --git a/projects/opendr_ws/src/opendr_planning/package.xml b/projects/opendr_ws/src/opendr_planning/package.xml
new file mode 100644
index 0000000000..c049e29ddb
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_planning/package.xml
@@ -0,0 +1,18 @@
+<?xml version="1.0"?>
+<package format="2">
+  <name>opendr_planning</name>
+  <version>2.0.0</version>
+  <description>OpenDR's ROS planning package</description>
+  <maintainer>OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+  <url>opendr.eu</url>
+  <buildtool_depend>catkin</buildtool_depend>
+  <build_depend>rospy</build_depend>
+  <build_depend>std_msgs</build_depend>
+  <build_export_depend>rospy</build_export_depend>
+  <build_export_depend>std_msgs</build_export_depend>
+  <exec_depend>rospy</exec_depend>
+  <exec_depend>std_msgs</exec_depend>
+  <export>
+  </export>
+</package>
diff --git a/projects/opendr_ws/src/opendr_planning/scripts/end_to_end_planner_node.py b/projects/opendr_ws/src/opendr_planning/scripts/end_to_end_planner_node.py
new file mode 100755
index 0000000000..757280aa16
--- /dev/null
+++ b/projects/opendr_ws/src/opendr_planning/scripts/end_to_end_planner_node.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import rospy
+import numpy as np
+import webots_ros.srv
+from cv_bridge import CvBridge
+from std_msgs.msg import String
+from sensor_msgs.msg import Imu, Image
+from geometry_msgs.msg import PoseStamped, PointStamped
+from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner
+from opendr.planning.end_to_end_planning.utils.euler_quaternion_transformations import euler_from_quaternion
+from opendr.planning.end_to_end_planning.utils.euler_quaternion_transformations import euler_to_quaternion
+
+
+class EndToEndPlannerNode:
+
+ def __init__(self):
+ """
+ Creates a ROS Node for end-to-end planner
+ """
+ self.node_name = "opendr_end_to_end_planner"
+ self.bridge = CvBridge()
+ self.model_name = ""
+ self.current_pose = PoseStamped()
+ self.target_pose = PoseStamped()
+ self.current_pose.header.frame_id = "map"
+ self.target_pose.header.frame_id = "map"
+ rospy.init_node(self.node_name, anonymous=True)
+ self.r = rospy.Rate(25)
+ rospy.Subscriber("/model_name", String, self.model_name_callback)
+ counter = 0
+ while self.model_name == "":
+ self.r.sleep()
+ counter += 1
+ if counter > 25:
+ break
+ if self.model_name == "":
+ rospy.loginfo("Webots model is not started!")
+ return
+ self.input_depth_image_topic = "/range_finder/range_image"
+ self.position_topic = "/gps/values"
+ self.orientation_topic = "/inertial_unit/quaternion"
+ self.ros_srv_range_sensor_enable = rospy.ServiceProxy(
+ "/range_finder/enable", webots_ros.srv.set_int)
+ self.ros_srv_gps_sensor_enable = rospy.ServiceProxy(
+ "/gps/enable", webots_ros.srv.set_int)
+ self.ros_srv_inertial_unit_enable = rospy.ServiceProxy(
+ "/inertial_unit/enable", webots_ros.srv.set_int)
+ self.end_to_end_planner = EndToEndPlanningRLLearner(env=None)
+
+ try:
+ self.ros_srv_gps_sensor_enable(1)
+ self.ros_srv_inertial_unit_enable(1)
+ self.ros_srv_range_sensor_enable(1)
+ except rospy.ServiceException as exc:
+ print("Service did not process request: " + str(exc))
+ self.ros_pub_current_pose = rospy.Publisher('current_uav_pose', PoseStamped, queue_size=10)
+ self.ros_pub_target_pose = rospy.Publisher('target_uav_pose', PoseStamped, queue_size=10)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data
+ """
+ rospy.Subscriber(self.orientation_topic, Imu, self.imu_callback)
+ rospy.Subscriber(self.position_topic, PointStamped, self.gps_callback)
+ rospy.Subscriber(self.input_depth_image_topic, Image, self.range_callback, queue_size=1)
+ rospy.spin()
+
+ def range_callback(self, data):
+ image_arr = self.bridge.imgmsg_to_cv2(data)
+ self.range_image = ((np.clip(image_arr.reshape((64, 64, 1)), 0, 15) / 15.) * 255).astype(np.uint8)
+ observation = {'depth_cam': np.copy(self.range_image), 'moving_target': np.array([5, 0, 0])}
+ action = self.end_to_end_planner.infer(observation, deterministic=True)[0]
+ self.publish_poses(action)
+
+ def gps_callback(self, data): # for no dynamics
+ self.current_pose.header.stamp = rospy.Time.now()
+ self.current_pose.pose.position.x = -data.point.x
+ self.current_pose.pose.position.y = -data.point.y
+ self.current_pose.pose.position.z = data.point.z
+
+ def imu_callback(self, data): # for no dynamics
+ self.current_orientation = data.orientation
+ self.current_yaw = euler_from_quaternion(data.orientation)["yaw"]
+ self.current_pose.pose.orientation = euler_to_quaternion(0, 0, yaw=self.current_yaw)
+
+ def model_name_callback(self, data):
+ if data.data[:5] == "robot":
+ self.model_name = data.data
+ if data.data[:4] == "quad":
+ self.model_name = data.data
+
+ def publish_poses(self, action):
+ self.ros_pub_current_pose.publish(self.current_pose)
+ forward_step = np.cos(action[0] * 22.5 / 180 * np.pi)
+ side_step = np.sin(action[0] * 22.5 / 180 * np.pi)
+ yaw_step = action[1] * 22.5 / 180 * np.pi
+ self.target_pose.header.stamp = rospy.Time.now()
+ self.target_pose.pose.position.x = self.current_pose.pose.position.x + forward_step * np.cos(
+ self.current_yaw) - side_step * np.sin(self.current_yaw)
+ self.target_pose.pose.position.y = self.current_pose.pose.position.y + forward_step * np.sin(
+ self.current_yaw) + side_step * np.cos(self.current_yaw)
+ self.target_pose.pose.position.z = self.current_pose.pose.position.z
+ self.target_pose.pose.orientation = euler_to_quaternion(0, 0, yaw=self.current_yaw+yaw_step)
+ self.ros_pub_target_pose.publish(self.target_pose)
+
+
+if __name__ == '__main__':
+ end_to_end_planner_node = EndToEndPlannerNode()
+ end_to_end_planner_node.listen()
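For readers unfamiliar with the action space, `publish_poses` above interprets a two-element action as a relative motion: the first element selects a heading offset in multiples of 22.5 degrees (split into forward and side steps in the body frame and rotated into the map frame by the current yaw), while the second element increments the target yaw. A standalone sketch of that geometry with placeholder numbers:

```python
import numpy as np

def action_to_target(x, y, yaw, action):
    # Heading offset and yaw increment in multiples of 22.5 degrees, as in the node above.
    forward_step = np.cos(action[0] * 22.5 / 180 * np.pi)
    side_step = np.sin(action[0] * 22.5 / 180 * np.pi)
    yaw_step = action[1] * 22.5 / 180 * np.pi
    # Rotate the body-frame step into the map frame using the current yaw.
    new_x = x + forward_step * np.cos(yaw) - side_step * np.sin(yaw)
    new_y = y + forward_step * np.sin(yaw) + side_step * np.cos(yaw)
    return new_x, new_y, yaw + yaw_step

print(action_to_target(0.0, 0.0, 0.0, (1, 0)))  # one step, offset 22.5 degrees, yaw unchanged
```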
diff --git a/projects/opendr_ws/src/ros_bridge/msg/.keep b/projects/opendr_ws/src/opendr_planning/src/.keep
similarity index 100%
rename from projects/opendr_ws/src/ros_bridge/msg/.keep
rename to projects/opendr_ws/src/opendr_planning/src/.keep
diff --git a/projects/opendr_ws/src/simulation/CMakeLists.txt b/projects/opendr_ws/src/opendr_simulation/CMakeLists.txt
similarity index 96%
rename from projects/opendr_ws/src/simulation/CMakeLists.txt
rename to projects/opendr_ws/src/opendr_simulation/CMakeLists.txt
index 5b25717dee..403bbf6c0e 100644
--- a/projects/opendr_ws/src/simulation/CMakeLists.txt
+++ b/projects/opendr_ws/src/opendr_simulation/CMakeLists.txt
@@ -1,5 +1,5 @@
cmake_minimum_required(VERSION 3.0.2)
-project(simulation)
+project(opendr_simulation)
find_package(catkin REQUIRED COMPONENTS
roscpp
diff --git a/projects/opendr_ws/src/simulation/README.md b/projects/opendr_ws/src/opendr_simulation/README.md
similarity index 79%
rename from projects/opendr_ws/src/simulation/README.md
rename to projects/opendr_ws/src/opendr_simulation/README.md
index 398eac32e3..3b943e83a7 100644
--- a/projects/opendr_ws/src/simulation/README.md
+++ b/projects/opendr_ws/src/opendr_simulation/README.md
@@ -1,4 +1,4 @@
-# Simulation Package
+# OpenDR Simulation Package
This package contains ROS nodes related to the simulation package of OpenDR.
@@ -14,10 +14,10 @@ export PYTHONPATH=$OPENDR_HOME/src:$PYTHONPATH
2. You can start the human model generation service node.
```shell
-rosrun simulation human_model_generation_service.py
+rosrun opendr_simulation human_model_generation_service.py
```
3. An example client node can be run to examine the basic utilities of the service.
```shell
-rosrun simulation human_model_generation_client.py
+rosrun opendr_simulation human_model_generation_client.py
```
diff --git a/projects/opendr_ws/src/simulation/package.xml b/projects/opendr_ws/src/opendr_simulation/package.xml
similarity index 93%
rename from projects/opendr_ws/src/simulation/package.xml
rename to projects/opendr_ws/src/opendr_simulation/package.xml
index cd9795529b..00df4fa4e0 100644
--- a/projects/opendr_ws/src/simulation/package.xml
+++ b/projects/opendr_ws/src/opendr_simulation/package.xml
@@ -1,7 +1,7 @@
-  <name>simulation</name>
-  <version>1.1.1</version>
+  <name>opendr_simulation</name>
+  <version>2.0.0</version>
   <description>OpenDR's ROS nodes for simulation package</description>
   <maintainer>OpenDR Project Coordinator</maintainer>
   <license>Apache License v2.0</license>
diff --git a/projects/opendr_ws/src/simulation/scripts/human_model_generation_client.py b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_client.py
similarity index 93%
rename from projects/opendr_ws/src/simulation/scripts/human_model_generation_client.py
rename to projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_client.py
index 1f9470f9c6..246c757432 100644
--- a/projects/opendr_ws/src/simulation/scripts/human_model_generation_client.py
+++ b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_client.py
@@ -20,14 +20,14 @@
from cv_bridge import CvBridge
from opendr_bridge import ROSBridge
from std_msgs.msg import Bool
-from simulation.srv import Mesh_vc
+from opendr_simulation.srv import Mesh_vc
from opendr.simulation.human_model_generation.utilities.model_3D import Model_3D
if __name__ == '__main__':
- rgb_img = cv2.imread(os.path.join(os.environ['OPENDR_HOME'], 'projects/simulation/'
+ rgb_img = cv2.imread(os.path.join(os.environ['OPENDR_HOME'], 'projects/python/simulation/'
'human_model_generation/demos/imgs_input/rgb/result_0004.jpg'))
- msk_img = cv2.imread(os.path.join(os.environ['OPENDR_HOME'], 'projects/simulation/'
+ msk_img = cv2.imread(os.path.join(os.environ['OPENDR_HOME'], 'projects/python/simulation/'
'human_model_generation/demos/imgs_input/msk/result_0004.jpg'))
bridge_cv = CvBridge()
bridge_ros = ROSBridge()
@@ -46,6 +46,6 @@
human_model = Model_3D(vertices, triangles, vertex_colors)
human_model.save_obj_mesh('./human_model.obj')
[out_imgs, human_pose_2D] = human_model.get_img_views(rotations=[30, 120], human_pose_3D=pose, plot_kps=True)
- cv2.imwrite('./rendering.png', out_imgs[0].numpy())
+ cv2.imwrite('./rendering.png', out_imgs[0].opencv())
except rospy.ServiceException as e:
print("Service call failed: %s" % e)
diff --git a/projects/opendr_ws/src/simulation/scripts/human_model_generation_service.py b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_service.py
similarity index 98%
rename from projects/opendr_ws/src/simulation/scripts/human_model_generation_service.py
rename to projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_service.py
index f869d989b3..0ad5f13643 100644
--- a/projects/opendr_ws/src/simulation/scripts/human_model_generation_service.py
+++ b/projects/opendr_ws/src/opendr_simulation/scripts/human_model_generation_service.py
@@ -19,7 +19,7 @@
import numpy as np
from opendr_bridge import ROSBridge
from opendr.simulation.human_model_generation.pifu_generator_learner import PIFuGeneratorLearner
-from simulation.srv import Mesh_vc
+from opendr_simulation.srv import Mesh_vc
class PifuNode:
diff --git a/projects/opendr_ws/src/simulation/srv/Mesh_vc.srv b/projects/opendr_ws/src/opendr_simulation/srv/Mesh_vc.srv
similarity index 100%
rename from projects/opendr_ws/src/simulation/srv/Mesh_vc.srv
rename to projects/opendr_ws/src/opendr_simulation/srv/Mesh_vc.srv
diff --git a/projects/opendr_ws/src/perception/README.md b/projects/opendr_ws/src/perception/README.md
deleted file mode 100755
index ba0ab81059..0000000000
--- a/projects/opendr_ws/src/perception/README.md
+++ /dev/null
@@ -1,304 +0,0 @@
-# Perception Package
-
-This package contains ROS nodes related to the perception package of OpenDR.
-
-## Dataset ROS Nodes
-
-Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can start a dataset node to publish data from the disk, which is useful to test the functionality without the use of a sensor.
-Dataset nodes take a `DatasetIterator` object that should return `(Data, Target)` pairs as elements.
-If the type of the `Data` object is correct, the node will transform it into a corresponding ROS message object and publish it to a desired topic.
-
-### Point Cloud Dataset ROS Node
-To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as:
-```shell
-rosrun perception point_cloud_dataset.py
-```
-By default, it downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements.
-
-### Image Dataset ROS Node
-To get an image from a dataset on the disk, you can start a `image_dataset.py` node as:
-```shell
-rosrun perception image_dataset.py
-```
-By default, it downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to the ROS topic. You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements.
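Although this README is removed by this change (its nodes move to the renamed `opendr_perception` package), the `DatasetIterator` contract it describes is worth illustrating: the object only needs to be indexable and sized, returning `(Data, Target)` pairs. A minimal sketch of a custom iterator that could be handed to the image dataset node, assuming `DatasetIterator` requires only `__getitem__` and `__len__`:

```python
import numpy as np
from opendr.engine.data import Image
from opendr.engine.datasets import DatasetIterator


class BlankImageDataset(DatasetIterator):
    """Toy dataset yielding blank frames; stands in for a real (Image, Target) dataset."""

    def __init__(self, length=10):
        super().__init__()
        self.length = length

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        image = Image(np.zeros((480, 640, 3), dtype=np.uint8))
        return image, None  # (Data, Target) pair; the dataset node only publishes the first element
```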
-
-## Pose Estimation ROS Node
-Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
-
-1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package):
-
-```shell
-rosrun usb_cam usb_cam_node
-```
-
-2. You are then ready to start the pose detection node
-
-```shell
-rosrun perception pose_estimation.py
-```
-
-3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_pose_annotated`) or
- `rostopic echo /opendr/poses`
-
-## Fall Detection ROS Node
-Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
-
-1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package):
-
-```shell
-rosrun usb_cam usb_cam_node
-```
-
-2. You are then ready to start the fall detection node
-
-```shell
-rosrun perception fall_detection.py
-```
-
-3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_fall_annotated`) or
- `rostopic echo /opendr/falls`, where the node publishes bounding boxes of detected fallen poses
-
-## Face Recognition ROS Node
-Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
-
-
-1. Start the node responsible for publishing images. If you have a usb camera, then you can use the corresponding node (assuming you have installed the corresponding package):
-
-```shell
-rosrun usb_cam usb_cam_node
-```
-
-2. You are then ready to start the face recognition node. Note that you should pass the folder containing the images of known faces as an argument to create the corresponding database of known persons.
-
-```shell
-rosrun perception face_recognition.py _database_path:='./database'
-```
-**Notes**
-
-Reference images should be placed in a defined structure like:
-- imgs
- - ID1
- - image1
- - image2
- - ID2
- - ID3
- - ...
-
-The name of the sub-folder, e.g. ID1, will be published under `/opendr/face_recognition_id`.
-
-3. The database entry and the returned confidence are published under the topic name `/opendr/face_recognition`, and the human-readable ID
-under `/opendr/face_recognition_id`.
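For reference, the recognized ID on `/opendr/face_recognition_id` is published as a plain `std_msgs/String`, so any minimal subscriber can consume it; a sketch (node and callback names are illustrative):

```python
import rospy
from std_msgs.msg import String

def on_face_id(msg):
    # msg.data holds the name of the matched reference sub-folder, e.g. "ID1".
    rospy.loginfo("Recognized person: %s", msg.data)

rospy.init_node("face_id_listener", anonymous=True)
rospy.Subscriber("/opendr/face_recognition_id", String, on_face_id)
rospy.spin()
```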
-
-## 2D Object Detection ROS Nodes
-ROS nodes are implemented for the SSD, YOLOv3, CenterNet and DETR generic object detectors. Steps 1 and 2 from above must be run first.
-Then, to initiate the SSD detector node, run:
-
-```shell
-rosrun perception object_detection_2d_ssd.py
-```
-The annotated image stream can be viewed using `rqt_image_view`, and the default topic name is
-`/opendr/image_boxes_annotated`. The bounding boxes alone are also published as `/opendr/objects`.
-Similarly, the YOLOv3, CenterNet and DETR detector nodes can be run with:
-```shell
-rosrun perception object_detection_2d_yolov3.py
-```
-or
-```shell
-rosrun perception object_detection_2d_centernet.py
-```
-or
-```shell
-rosrun perception object_detection_2d_detr.py
-```
-respectively.
-
-## Face Detection ROS Node
-A ROS node for the RetinaFace detector is implemented, supporting both the ResNet and MobileNet versions, the latter of
-which performs mask recognition as well. After setting up the environment, the detector node can be initiated as:
-```shell
-rosrun perception face_detection_retinaface.py
-```
-The annotated image stream is published under the topic name `/opendr/image_boxes_annotated`, and the bounding boxes alone
-under `/opendr/faces`.
-
-## GEM ROS Node
-Assuming that you have already [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
-
-
-1. Add OpenDR to `PYTHONPATH` (please make sure you do not overwrite `PYTHONPATH` ), e.g.,
-```shell
-export PYTHONPATH="/home/user/opendr/src:$PYTHONPATH"
-```
-2. First, one needs to find corresponding points in the color and infrared images, in order to compute the homography matrix that corrects for the difference in perspective between the infrared and the RGB camera (see the sketch after this list).
-These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit.
-
-3. Pass the points you have found as *pts_color* and *pts_infra* arguments to the ROS gem.py node.
-
-4. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)):
-
-```shell
-roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480
-```
-
-5. You are then ready to start the GEM object detection node
-
-```shell
-rosrun perception object_detection_2d_gem.py
-```
-
-6. You can examine the annotated image stream using `rqt_image_view` (select one of the topics `/opendr/color_detection_annotated` or `/opendr/infra_detection_annotated`) or `rostopic echo /opendr/detections`
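As mentioned in step 2, the color/infrared alignment reduces to a planar homography estimated from the matched point pairs. A minimal sketch of that estimation, assuming the point lists follow the `pts_color`/`pts_infra` format expected by the node (the coordinates below are placeholders):

```python
import cv2
import numpy as np

# Matched pixel coordinates picked in the color and infrared images (placeholder values).
pts_color = np.array([[110, 80], [520, 90], [530, 400], [120, 410]], dtype=np.float32)
pts_infra = np.array([[95, 70], [500, 85], [515, 390], [105, 395]], dtype=np.float32)

# Homography that maps infrared pixels onto the color image plane.
H, mask = cv2.findHomography(pts_infra, pts_color, cv2.RANSAC)
print(H)
```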
-
-
-## Panoptic Segmentation ROS Node
-A ROS node for performing panoptic segmentation on a specified RGB image stream using the [EfficientPS](../../../../src/opendr/perception/panoptic_segmentation/README.md) network.
-Assuming that the OpenDR catkin workspace has been sourced, the node can be started with:
-```shell
-rosrun perception panoptic_segmentation_efficient_ps.py CHECKPOINT IMAGE_TOPIC
-```
-with `CHECKPOINT` pointing to the path of the trained model weights and `IMAGE_TOPIC` specifying the ROS topic to which the node will subscribe.
-
-Additionally, the following optional arguments are available:
-- `-h, --help`: show a help message and exit
-- `--heamap_topic HEATMAP_TOPIC`: publish the semantic and instance maps on `HEATMAP_TOPIC`
-- `--visualization_topic VISUALIZATION_TOPIC`: publish the panoptic segmentation map as an RGB image on `VISUALIZATION_TOPIC` or a more detailed overview if using the `--detailed_visualization` flag
-- `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps
-
-
-## Semantic Segmentation ROS Node
-A ROS node for performing semantic segmentation on an input image using the BiseNet model.
-Assuming that the OpenDR catkin workspace has been sourced, the node can be started with:
-```shell
-rosrun perception semantic_segmentation_bisenet.py IMAGE_TOPIC
-```
-
-Additionally, the following optional arguments are available:
-- `-h, --help`: show a help message and exit
-- `--heamap_topic HEATMAP_TOPIC`: publish the heatmap on `HEATMAP_TOPIC`
-
-## RGBD Hand Gesture Recognition ROS Node
-
-A ROS node for performing hand gesture recognition using MobileNetv2 model trained on HANDS dataset. The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2. Assuming that the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception rgbd_hand_gesture_recognition.py
-```
-The predicted classes are published to the topic `/opendr/gestures`.
-
-## Heart Anomaly Detection ROS Node
-
-A ROS node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on the AF dataset. Assuming that the OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception heart_anomaly_detection.py ECG_TOPIC MODEL
-```
-with `ECG_TOPIC` specifying the ROS topic to which the node will subscribe, and `MODEL` set to either *gru* or *anbof*. The predicted classes are published to the topic `/opendr/heartanomaly`.
-
-## Human Action Recognition ROS Node
-
-A ROS node for performing Human Activity Recognition using either CoX3D or X3D models pretrained on Kinetics400.
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception video_activity_recognition.py
-```
-The predicted class id and confidence are published under the topic name `/opendr/human_activity_recognition`, and the human-readable class name under `/opendr/human_activity_recognition_description`.
-
-## Landmark-based Facial Expression Recognition ROS Node
-
-A ROS node for performing Landmark-based Facial Expression Recognition using the pretrained model PST-BLN on AFEW, CK+ or Oulu-CASIA datasets.
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception landmark_based_facial_expression_recognition.py
-```
-The predicted class id and confidence are published under the topic name `/opendr/landmark_based_expression_recognition`, and the human-readable class name under `/opendr/landmark_based_expression_recognition_description`.
-
-## Skeleton-based Human Action Recognition ROS Node
-
-A ROS node for performing Skeleton-based Human Action Recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset. The human body poses of the image are first extracted by the light-weight Openpose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized.
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception skeleton_based_action_recognition.py
-```
-The predicted class id and confidence are published under the topic name `/opendr/skeleton_based_action_recognition`, and the human-readable class name under `/opendr/skeleton_based_action_recognition_description`.
-Besides, the annotated image is published in `/opendr/image_pose_annotated` as well as the corresponding poses in `/opendr/poses`.
-
-## Speech Command Recognition ROS Node
-
-A ROS node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset.
-Assuming that the OpenDR catkin workspace has been sourced, the node can be started with:
-```shell
-rosrun perception speech_command_recognition.py INPUT_AUDIO_TOPIC
-```
-The following optional arguments are available:
-- `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds, default value **1.5**
-- `--model MODEL`: choose the model to use: `matchboxnet` (default value), `edgespeechnets` or `quad_selfonn`
-- `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
-
-The predictions (class id and confidence) are published to the topic `/opendr/speech_recognition`.
-**Note:** EdgeSpeechNets currently does not have a pretrained model available for download, only local files may be used.
-
-## Voxel Object Detection 3D ROS Node
-
-A ROS node for performing Object Detection 3D using PointPillars or TANet methods with either pretrained models on KITTI dataset, or custom trained models.
-The predicted detection annotations are pushed to `output_detection3d_topic` (default `output_detection3d_topic="/opendr/detection3d"`).
-
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception object_detection_3d_voxel.py
-```
-To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as:
-```shell
-rosrun perception point_cloud_dataset.py
-```
-This will publish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`.
-
-## AB3DMOT Object Tracking 3D ROS Node
-
-A ROS node for performing Object Tracking 3D using AB3DMOT stateless method.
-This is a detection-based method, and therefore the 3D object detector is needed to provide detections, which then will be used to make associations and generate tracking ids.
-The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection3d"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking3d_id"`).
-
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception object_tracking_3d_ab3dmot.py
-```
-To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as:
-```shell
-rosrun perception point_cloud_dataset.py
-```
-This will publish the dataset point clouds to a `/opendr/dataset_point_cloud` topic by default, which means that the `input_point_cloud_topic` should be set to `/opendr/dataset_point_cloud`.
-
-
-## FairMOT Object Tracking 2D ROS Node
-
-A ROS node for performing Object Tracking 2D using FairMOT with either pretrained models on MOT dataset, or custom trained models. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`)
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception object_tracking_2d_fair_mot.py
-```
-To get images from usb_camera, you can start the camera node as:
-```shell
-rosrun usb_cam usb_cam_node
-```
-The corresponding `input_image_topic` should be `/usb_cam/image_raw`.
-If you want to use a dataset from the disk, you can start a `image_dataset.py` node as:
-```shell
-rosrun perception image_dataset.py
-```
-This will publish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`.
-
-## Deep Sort Object Tracking 2D ROS Node
-
-A ROS node for performing Object Tracking 2D using Deep Sort, with either pretrained models on the Market1501 dataset or custom trained models. This is a detection-based method, and therefore a 2D object detector is needed to provide detections, which will then be used to make associations and generate tracking ids. The predicted tracking annotations are split into two topics with detections (default `output_detection_topic="/opendr/detection"`) and tracking ids (default `output_tracking_id_topic="/opendr/tracking_id"`). Additionally, an annotated image is generated if the `output_image_topic` is not None (default `output_image_topic="/opendr/image_annotated"`).
-Assuming the drivers have been installed and OpenDR catkin workspace has been sourced, the node can be started as:
-```shell
-rosrun perception object_tracking_2d_deep_sort.py
-```
-To get images from usb_camera, you can start the camera node as:
-```shell
-rosrun usb_cam usb_cam_node
-```
-The corresponding `input_image_topic` should be `/usb_cam/image_raw`.
-If you want to use a dataset from the disk, you can start an `image_dataset.py` node as:
-```shell
-rosrun perception image_dataset.py
-```
-This will publish the dataset images to an `/opendr/dataset_image` topic by default, which means that the `input_image_topic` should be set to `/opendr/dataset_image`.
-
diff --git a/projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py b/projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py
deleted file mode 100755
index 7227951b17..0000000000
--- a/projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py
+++ /dev/null
@@ -1,127 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import rospy
-import mxnet as mx
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.perception.object_detection_2d import RetinaFaceLearner
-from opendr.perception.object_detection_2d import draw_bounding_boxes
-from opendr.engine.data import Image
-
-
-class FaceDetectionNode:
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
- face_detections_topic="/opendr/faces", device="cuda", backbone="resnet"):
- """
- Creates a ROS Node for face detection
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param face_detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
- annotated pose annotations)
- :type face_detections_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- :param backbone: retinaface backbone, options are ('mnet' and 'resnet'), where 'mnet' detects masked faces as well
- :type backbone: str
- """
-
- # Initialize the face detector
- self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
- self.face_detector.download(path=".", verbose=True)
- self.face_detector.load("retinaface_{}".format(backbone))
- self.class_names = ["face", "masked_face"]
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- # setup communications
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if face_detections_topic is not None:
- self.face_publisher = rospy.Publisher(face_detections_topic, Detection2DArray, queue_size=10)
- else:
- self.face_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
- Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
- # Run pose estimation
- boxes = self.face_detector.infer(image)
-
- # Get an OpenCV image back
- image = image.opencv()
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(boxes)
- if self.face_publisher is not None:
- self.face_publisher.publish(ros_boxes)
- rospy.loginfo("Published face boxes")
-
- # Annotate image and publish result
- # NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
- # only used to test the corresponding bridge methods
- odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
- image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
-
-if __name__ == '__main__':
- # Automatically run on GPU/CPU
- try:
- if mx.context.num_gpus() > 0:
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- # initialize ROS node
- rospy.init_node('opendr_face_detection', anonymous=True)
- rospy.loginfo("Face detection node started!")
-
- # get network backbone ("mnet" detects masked faces as well)
- backbone = rospy.get_param("~backbone", "resnet")
- input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")
-
- rospy.loginfo("Using backbone: {}".format(backbone))
- assert backbone in ["resnet", "mnet"], "backbone should be one of ['resnet', 'mnet']"
-
- # created node object
- face_detection_node = FaceDetectionNode(device=device, backbone=backbone,
- input_image_topic=input_image_topic)
- # begin ROS communications
- rospy.spin()
diff --git a/projects/opendr_ws/src/perception/scripts/face_recognition.py b/projects/opendr_ws/src/perception/scripts/face_recognition.py
deleted file mode 100755
index 9bbe783f33..0000000000
--- a/projects/opendr_ws/src/perception/scripts/face_recognition.py
+++ /dev/null
@@ -1,148 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-from vision_msgs.msg import ObjectHypothesis
-from std_msgs.msg import String
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-
-from opendr.perception.face_recognition import FaceRecognitionLearner
-from opendr.perception.object_detection_2d import RetinaFaceLearner
-from opendr.perception.object_detection_2d.datasets.transforms import BoundingBoxListToNumpyArray
-
-
-class FaceRecognitionNode:
-
- def __init__(self, input_image_topic="/usb_cam/image_raw",
- face_recognition_topic="/opendr/face_recognition",
- face_id_topic="/opendr/face_recognition_id",
- database_path="./database", device="cuda",
- backbone='mobilefacenet'):
- """
- Creates a ROS Node for face recognition
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param face_recognition_topic: Topic to which we are publishing the recognized face info
- (if None, we are not publishing the info)
- :type face_recognition_topic: str
- :param face_id_topic: Topic to which we are publishing the ID of the recognized person
- (if None, we are not publishing the ID)
- :type face_id_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
-
- # Initialize the face recognizer
- self.recognizer = FaceRecognitionLearner(device=device, mode='backbone_only', backbone=backbone)
- self.recognizer.download(path=".")
- self.recognizer.load(".")
- self.recognizer.fit_reference(database_path, save_path=".", create_new=True)
-
- # Initialize the face detector
- self.face_detector = RetinaFaceLearner(backbone='mnet', device=device)
- self.face_detector.download(path=".", verbose=True)
- self.face_detector.load("retinaface_{}".format('mnet'))
- self.class_names = ["face", "masked_face"]
-
- if face_recognition_topic is not None:
- self.face_publisher = rospy.Publisher(face_recognition_topic, ObjectHypothesis, queue_size=10)
- else:
- self.face_publisher = None
-
- if face_id_topic is not None:
- self.face_id_publisher = rospy.Publisher(face_id_topic, String, queue_size=10)
- else:
- self.face_id_publisher = None
-
- self.bridge = ROSBridge()
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
- Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data)
- image = image.opencv()
-
- # Run face detection and recognition
- if image is not None:
- bounding_boxes = self.face_detector.infer(image)
- if bounding_boxes:
- bounding_boxes = BoundingBoxListToNumpyArray()(bounding_boxes)
- boxes = bounding_boxes[:, :4]
- for idx, box in enumerate(boxes):
- (startX, startY, endX, endY) = int(box[0]), int(box[1]), int(box[2]), int(box[3])
- img = image[startY:endY, startX:endX]
- result = self.recognizer.infer(img)
-
- if result.data is not None:
- if self.face_publisher is not None:
- ros_face = self.bridge.to_ros_face(result)
- self.face_publisher.publish(ros_face)
-
- if self.face_id_publisher is not None:
- ros_face_id = self.bridge.to_ros_face_id(result)
- self.face_id_publisher.publish(ros_face_id.data)
-
- else:
- result.description = "Unknown"
- if self.face_publisher is not None:
- ros_face = self.bridge.to_ros_face(result)
- self.face_publisher.publish(ros_face)
-
- if self.face_id_publisher is not None:
- ros_face_id = self.bridge.to_ros_face_id(result)
- self.face_id_publisher.publish(ros_face_id.data)
-
- # We can get the data back using self.bridge.from_ros_face(ros_face)
- # e.g.
- # face = self.bridge.from_ros_face(ros_face)
- # face.description = self.recognizer.database[face.id][0]
-
-
-if __name__ == '__main__':
- # Select the device for running the
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- # initialize ROS node
- rospy.init_node('opendr_face_recognition', anonymous=True)
- rospy.loginfo("Face recognition node started!")
-
- # get network backbone
- backbone = rospy.get_param("~backbone", "mobilefacenet")
- input_image_topic = rospy.get_param("~input_image_topic", "/usb_cam/image_raw")
- database_path = rospy.get_param('~database_path', './')
- rospy.loginfo("Using backbone: {}".format(backbone))
- assert backbone in ["mobilefacenet", "ir_50"], "backbone should be one of ['mobilefacenet', 'ir_50']"
-
- face_recognition_node = FaceRecognitionNode(device=device, backbone=backbone,
- input_image_topic=input_image_topic,
- database_path=database_path)
- # begin ROS communications
- rospy.spin()
diff --git a/projects/opendr_ws/src/perception/scripts/fall_detection.py b/projects/opendr_ws/src/perception/scripts/fall_detection.py
deleted file mode 100644
index ef456d2ec8..0000000000
--- a/projects/opendr_ws/src/perception/scripts/fall_detection.py
+++ /dev/null
@@ -1,133 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-import cv2
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.perception.pose_estimation import get_bbox
-from opendr.perception.pose_estimation import LightweightOpenPoseLearner
-from opendr.perception.fall_detection import FallDetectorLearner
-from opendr.engine.data import Image
-from opendr.engine.target import BoundingBox, BoundingBoxList
-
-
-class FallDetectionNode:
-
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_fall_annotated",
- fall_annotations_topic="/opendr/falls", device="cuda"):
- """
- Creates a ROS Node for fall detection
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param fall_annotations_topic: Topic to which we are publishing the annotations (if None, we are not publishing
- annotated fall annotations)
- :type fall_annotations_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if fall_annotations_topic is not None:
- self.fall_publisher = rospy.Publisher(fall_annotations_topic, Detection2DArray, queue_size=10)
- else:
- self.fall_publisher = None
-
- self.input_image_topic = input_image_topic
-
- self.bridge = ROSBridge()
-
- # Initialize the pose estimation
- self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=2,
- mobilenet_use_stride=False,
- half_precision=False)
- self.pose_estimator.download(path=".", verbose=True)
- self.pose_estimator.load("openpose_default")
-
- self.fall_detector = FallDetectorLearner(self.pose_estimator)
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- rospy.init_node('opendr_fall_detection', anonymous=True)
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
- rospy.loginfo("Fall detection node started!")
- rospy.spin()
-
- def callback(self, data):
- """
- Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
- # Run fall detection
- detections = self.fall_detector.infer(image)
-
- # Get an OpenCV image back
- image = image.opencv()
-
- bboxes = BoundingBoxList([])
- for detection in detections:
- fallen = detection[0].data
- pose = detection[2]
-
- if fallen == 1:
- color = (0, 0, 255)
- x, y, w, h = get_bbox(pose)
- bbox = BoundingBox(left=x, top=y, width=w, height=h, name=0)
- bboxes.data.append(bbox)
-
- cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
- cv2.putText(image, "Detected fallen person", (5, 55), cv2.FONT_HERSHEY_SIMPLEX,
- 0.75, color, 1, cv2.LINE_AA)
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(bboxes)
- if self.fall_publisher is not None:
- self.fall_publisher.publish(ros_boxes)
-
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
-
-
-if __name__ == '__main__':
- # Select the device for running the
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- fall_detection_node = FallDetectionNode(device=device)
- fall_detection_node.listen()
diff --git a/projects/opendr_ws/src/perception/scripts/image_dataset.py b/projects/opendr_ws/src/perception/scripts/image_dataset.py
deleted file mode 100644
index 0ce4ee3850..0000000000
--- a/projects/opendr_ws/src/perception/scripts/image_dataset.py
+++ /dev/null
@@ -1,84 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import rospy
-import time
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.engine.datasets import DatasetIterator
-from opendr.perception.object_tracking_2d import MotDataset, RawMotDatasetIterator
-
-
-class ImageDatasetNode:
- def __init__(
- self,
- dataset: DatasetIterator,
- output_image_topic="/opendr/dataset_image",
- ):
- """
- Creates a ROS Node for publishing dataset images
- """
-
- # Initialize the face detector
- self.dataset = dataset
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- if output_image_topic is not None:
- self.output_image_publisher = rospy.Publisher(
- output_image_topic, ROS_Image, queue_size=10
- )
-
- def start(self):
- rospy.loginfo("Timing images")
-
- i = 0
-
- while not rospy.is_shutdown():
-
- image = self.dataset[i % len(self.dataset)][0] # Dataset should have an (Image, Target) pair as elements
-
- rospy.loginfo("Publishing image [" + str(i) + "]")
- message = self.bridge.to_ros_image(
- image, encoding="rgb8"
- )
- self.output_image_publisher.publish(message)
-
- time.sleep(0.1)
-
- i += 1
-
-
-if __name__ == "__main__":
-
- rospy.init_node('opendr_image_dataset')
-
- dataset_path = MotDataset.download_nano_mot20(
- "MOT", True
- ).path
-
- dataset = RawMotDatasetIterator(
- dataset_path,
- {
- "mot20": os.path.join(
- "..", "..", "src", "opendr", "perception", "object_tracking_2d",
- "datasets", "splits", "nano_mot20.train"
- )
- },
- scan_labels=False
- )
- dataset_node = ImageDatasetNode(dataset)
- dataset_node.start()
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py b/projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py
deleted file mode 100755
index c1615f99a7..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py
+++ /dev/null
@@ -1,122 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import rospy
-import mxnet as mx
-import numpy as np
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr.engine.data import Image
-from opendr_bridge import ROSBridge
-from opendr.perception.object_detection_2d import CenterNetDetectorLearner
-from opendr.perception.object_detection_2d import draw_bounding_boxes
-
-
-class ObjectDetectionCenterNetNode:
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
- detections_topic="/opendr/objects", device="cuda", backbone="resnet50_v1b"):
- """
- Creates a ROS Node for face detection
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
- annotated pose annotations)
- :type detections_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- :param backbone: backbone network
- :type backbone: str
- """
-
- # Initialize the face detector
- self.object_detector = CenterNetDetectorLearner(backbone=backbone, device=device)
- self.object_detector.download(path=".", verbose=True)
- self.object_detector.load("centernet_default")
- self.class_names = self.object_detector.classes
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- # setup communications
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if detections_topic is not None:
- self.bbox_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=10)
- else:
- self.bbox_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
- Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
- # Run pose estimation
- boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False)
-
- # Get an OpenCV image back
- image = np.float32(image.opencv())
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(boxes)
- if self.bbox_publisher is not None:
- self.bbox_publisher.publish(ros_boxes)
- rospy.loginfo("Published face boxes")
-
- # Annotate image and publish result
- # NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
- # only used to test the corresponding bridge methods
- odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
- image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
-
-if __name__ == '__main__':
- # Automatically run on GPU/CPU
- try:
- if mx.context.num_gpus() > 0:
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- # initialize ROS node
- rospy.init_node('opendr_object_detection', anonymous=True)
- rospy.loginfo("Object detection node started!")
-
- input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")
-
- # created node object
- object_detection_node = ObjectDetectionCenterNetNode(device=device, input_image_topic=input_image_topic)
- # begin ROS communications
- rospy.spin()
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_detr.py b/projects/opendr_ws/src/perception/scripts/object_detection_2d_detr.py
deleted file mode 100644
index ec98c4ddf0..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_detection_2d_detr.py
+++ /dev/null
@@ -1,114 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-import numpy as np
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr.engine.data import Image
-from opendr_bridge import ROSBridge
-from opendr.perception.object_detection_2d.detr.algorithm.util.draw import draw
-from opendr.perception.object_detection_2d import DetrLearner
-
-
-class DetrNode:
-
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
- detection_annotations_topic="/opendr/objects", device="cuda"):
- """
- Creates a ROS Node for object detection with DETR
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param detection_annotations_topic: Topic to which we are publishing the annotations (if None, we are not publishing
- annotations)
- :type detection_annotations_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
-
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if detection_annotations_topic is not None:
- self.detection_publisher = rospy.Publisher(detection_annotations_topic, Detection2DArray, queue_size=10)
- else:
- self.detection_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- self.bridge = ROSBridge()
-
- # Initialize the detection estimation
- self.detr_learner = DetrLearner(device=device)
- self.detr_learner.download(path=".", verbose=True)
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- rospy.init_node('detr', anonymous=True)
- rospy.loginfo("DETR node started!")
- rospy.spin()
-
- def callback(self, data):
- """
- Callback that process the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
- # Run detection estimation
- boxes = self.detr_learner.infer(image)
-
- # Get an OpenCV image back
- image = np.float32(image.opencv())
-
- # Annotate image and publish results:
- if self.detection_publisher is not None:
- ros_detection = self.bridge.to_ros_bounding_box_list(boxes)
- self.detection_publisher.publish(ros_detection)
- # We can get the data back using self.bridge.from_ros_bounding_box_list(ros_detection)
- # e.g., opendr_detection = self.bridge.from_ros_bounding_box_list(ros_detection)
-
- if self.image_publisher is not None:
- image = draw(image, boxes)
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
-
-
-if __name__ == '__main__':
-    # Select the device for running inference
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- detection_estimation_node = DetrNode(device=device)
- detection_estimation_node.listen()
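For orientation, the removed DETR node above wraps a simple per-frame workflow: convert the incoming image, run the learner, and draw and publish the result. A minimal offline sketch of the same workflow, built only from the OpenDR calls used in the node (the sample image path is a placeholder assumption), could look as follows:

```python
import cv2
import numpy as np

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import DetrLearner
from opendr.perception.object_detection_2d.detr.algorithm.util.draw import draw

# Download and initialize the detector, exactly as the node does in __init__
learner = DetrLearner(device="cpu")
learner.download(path=".", verbose=True)

# Wrap an OpenCV frame in an OpenDR Image, mirroring what ROSBridge.from_ros_image does
image = Image(cv2.imread("sample.jpg"))  # placeholder path

# Run inference and draw the resulting bounding boxes
boxes = learner.infer(image)
annotated = draw(np.float32(image.opencv()), boxes)
cv2.imwrite("sample_annotated.jpg", np.uint8(annotated))
```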
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_gem.py b/projects/opendr_ws/src/perception/scripts/object_detection_2d_gem.py
deleted file mode 100644
index ee1d784566..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_detection_2d_gem.py
+++ /dev/null
@@ -1,200 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-import message_filters
-import cv2
-import time
-import numpy as np
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.perception.object_detection_2d import GemLearner
-from opendr.perception.object_detection_2d import draw
-from opendr.engine.data import Image
-
-
-class GemNode:
-
- def __init__(self,
- input_color_topic="/camera/color/image_raw",
- input_infra_topic="/camera/infra/image_raw",
- output_color_topic="/opendr/color_detection_annotated",
- output_infra_topic="/opendr/infra_detection_annotated",
- detection_annotations_topic="/opendr/detections",
- device="cuda",
- pts_color=None,
- pts_infra=None,
- ):
- """
- Creates a ROS Node for object detection with GEM
- :param input_color_topic: Topic from which we are reading the input color image
- :type input_color_topic: str
- :param input_infra_topic: Topic from which we are reading the input infrared image
-        :type input_infra_topic: str
- :param output_color_topic: Topic to which we are publishing the annotated color image (if None, we are not
- publishing annotated image)
- :type output_color_topic: str
- :param output_infra_topic: Topic to which we are publishing the annotated infrared image (if None, we are not
- publishing annotated image)
- :type output_infra_topic: str
- :param detection_annotations_topic: Topic to which we are publishing the annotations (if None, we are
- not publishing annotations)
- :type detection_annotations_topic: str
- :param device: Device on which we are running inference ('cpu' or 'cuda')
- :type device: str
-        :param pts_color: Points on the color image that define the alignment with the infrared image. These are camera
- specific and can be obtained using get_color_infra_alignment.py which is located in the
- opendr/perception/object_detection2d/utils module.
- :type pts_color: {list, numpy.ndarray}
- :param pts_infra: Points on the infrared image that define alignment with color image. These are camera specific
- and can be obtained using get_color_infra_alignment.py which is located in the
- opendr/perception/object_detection2d/utils module.
- :type pts_infra: {list, numpy.ndarray}
- """
- rospy.init_node('gem', anonymous=True)
- if output_color_topic is not None:
- self.rgb_publisher = rospy.Publisher(output_color_topic, ROS_Image, queue_size=10)
- else:
- self.rgb_publisher = None
- if output_infra_topic is not None:
- self.ir_publisher = rospy.Publisher(output_infra_topic, ROS_Image, queue_size=10)
- else:
- self.ir_publisher = None
-
- if detection_annotations_topic is not None:
- self.detection_publisher = rospy.Publisher(detection_annotations_topic, Detection2DArray, queue_size=10)
- else:
- self.detection_publisher = None
- if pts_infra is None:
- pts_infra = np.array([[478, 248], [465, 338], [458, 325], [468, 256],
- [341, 240], [335, 310], [324, 321], [311, 383],
- [434, 365], [135, 384], [67, 257], [167, 206],
- [124, 131], [364, 276], [424, 269], [277, 131],
- [41, 310], [202, 320], [188, 318], [188, 308],
- [196, 241], [499, 317], [311, 164], [220, 216],
- [435, 352], [213, 363], [390, 364], [212, 368],
- [390, 370], [467, 324], [415, 364]])
- rospy.logwarn(
- '\nUsing default calibration values for pts_infra!' +
- '\nThese are probably incorrect.' +
- '\nThe correct values for pts_infra can be found by running get_color_infra_alignment.py.' +
- '\nThis file is located in the opendr/perception/object_detection2d/utils module.'
- )
- if pts_color is None:
- pts_color = np.array([[910, 397], [889, 572], [874, 552], [891, 411],
- [635, 385], [619, 525], [603, 544], [576, 682],
- [810, 619], [216, 688], [90, 423], [281, 310],
- [193, 163], [684, 449], [806, 431], [504, 170],
- [24, 538], [353, 552], [323, 550], [323, 529],
- [344, 387], [961, 533], [570, 233], [392, 336],
- [831, 610], [378, 638], [742, 630], [378, 648],
- [742, 640], [895, 550], [787, 630]])
- rospy.logwarn(
- '\nUsing default calibration values for pts_color!' +
- '\nThese are probably incorrect.' +
- '\nThe correct values for pts_color can be found by running get_color_infra_alignment.py.' +
- '\nThis file is located in the opendr/perception/object_detection2d/utils module.'
- )
- # Object classes
- self.classes = ['N/A', 'chair', 'cycle', 'bin', 'laptop', 'drill', 'rocker']
-
- # Estimating Homography matrix for aligning infra with RGB
- self.h, status = cv2.findHomography(pts_infra, pts_color)
-
- self.bridge = ROSBridge()
-
-        # Initialize the object detection learner
- model_backbone = "resnet50"
-
- self.gem_learner = GemLearner(backbone=model_backbone,
- num_classes=7,
- device=device,
- )
- self.gem_learner.fusion_method = 'sc_avg'
- self.gem_learner.download(path=".", verbose=True)
-
- # Subscribers
- msg_rgb = message_filters.Subscriber(input_color_topic, ROS_Image)
- msg_ir = message_filters.Subscriber(input_infra_topic, ROS_Image)
-
- sync = message_filters.TimeSynchronizer([msg_rgb, msg_ir], 1)
- sync.registerCallback(self.callback)
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- self.fps_list = []
- rospy.loginfo("GEM node started!")
- rospy.spin()
-
- def callback(self, msg_rgb, msg_ir):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param msg_rgb: input color image message
- :type msg_rgb: sensor_msgs.msg.Image
- :param msg_ir: input infrared image message
- :type msg_ir: sensor_msgs.msg.Image
- """
- # Convert images to OpenDR standard
- image_rgb = self.bridge.from_ros_image(msg_rgb).opencv()
- image_ir_raw = self.bridge.from_ros_image(msg_ir, 'bgr8').opencv()
- image_ir = cv2.warpPerspective(image_ir_raw, self.h, (image_rgb.shape[1], image_rgb.shape[0]))
-
- # Perform inference on images
- start = time.time()
- boxes, w_sensor1, _ = self.gem_learner.infer(image_rgb, image_ir)
- end = time.time()
-
- # Calculate fps
- fps = 1 / (end - start)
- self.fps_list.append(fps)
- if len(self.fps_list) > 10:
- del self.fps_list[0]
- mean_fps = sum(self.fps_list) / len(self.fps_list)
-
- # Annotate image and publish results:
- if self.detection_publisher is not None:
- ros_detection = self.bridge.to_ros_bounding_box_list(boxes)
- self.detection_publisher.publish(ros_detection)
-            # We can get the data back using self.bridge.from_ros_bounding_box_list(ros_detection)
- # e.g., opendr_detection = self.bridge.from_ros_bounding_box_list(ros_detection)
-
- if self.rgb_publisher is not None:
- plot_rgb = draw(image_rgb, boxes, w_sensor1, mean_fps)
- message = self.bridge.to_ros_image(Image(np.uint8(plot_rgb)))
- self.rgb_publisher.publish(message)
- if self.ir_publisher is not None:
- plot_ir = draw(image_ir, boxes, w_sensor1, mean_fps)
- message = self.bridge.to_ros_image(Image(np.uint8(plot_ir)))
- self.ir_publisher.publish(message)
-
-
-if __name__ == '__main__':
-    # Select the device for running inference
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
- detection_estimation_node = GemNode(device=device)
- detection_estimation_node.listen()
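The `pts_color` / `pts_infra` arguments documented in the GEM node above are corresponding pixel coordinates used to estimate a homography, so that the infrared frame can be warped onto the color frame before fused inference. A small standalone sketch of that alignment step (the image paths and the four point pairs below are placeholders, not calibrated values):

```python
import cv2
import numpy as np

# Four (or more) corresponding points picked on each sensor; real values are
# camera specific and should come from get_color_infra_alignment.py.
pts_infra = np.array([[478, 248], [465, 338], [341, 240], [335, 310]], dtype=np.float32)
pts_color = np.array([[910, 397], [889, 572], [635, 385], [619, 525]], dtype=np.float32)

# Estimate the homography that maps infrared pixels onto the color image plane
h, status = cv2.findHomography(pts_infra, pts_color)

color = cv2.imread("color_frame.png")   # placeholder frames
infra = cv2.imread("infra_frame.png")

# Warp the infrared frame so it is pixel-aligned with the color frame,
# which is what the GEM node does before calling gem_learner.infer()
infra_aligned = cv2.warpPerspective(infra, h, (color.shape[1], color.shape[0]))
```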
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_ssd.py b/projects/opendr_ws/src/perception/scripts/object_detection_2d_ssd.py
deleted file mode 100755
index f0dd7ca1d3..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_detection_2d_ssd.py
+++ /dev/null
@@ -1,139 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import rospy
-import mxnet as mx
-import numpy as np
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.engine.data import Image
-from opendr.perception.object_detection_2d import SingleShotDetectorLearner
-from opendr.perception.object_detection_2d import draw_bounding_boxes
-from opendr.perception.object_detection_2d import Seq2SeqNMSLearner, SoftNMS, FastNMS, ClusterNMS
-
-
-class ObjectDetectionSSDNode:
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
- detections_topic="/opendr/objects", device="cuda", backbone="vgg16_atrous", nms_type='default'):
- """
-        Creates a ROS Node for 2D object detection with SSD
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
-        annotations)
- :type detections_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- :param backbone: backbone network
- :type backbone: str
-        :param nms_type: type of NMS method
- :type nms_type: str
- """
-
-        # Initialize the object detector
- self.object_detector = SingleShotDetectorLearner(backbone=backbone, device=device)
- self.object_detector.download(path=".", verbose=True)
- self.object_detector.load("ssd_default_person")
- self.class_names = self.object_detector.classes
- self.custom_nms = None
-
- # Initialize Seq2Seq-NMS if selected
- if nms_type == 'seq2seq-nms':
- self.custom_nms = Seq2SeqNMSLearner(fmod_map_type='EDGEMAP', iou_filtering=0.8,
-                                                app_feats='fmod', device=device)
- self.custom_nms.download(model_name='seq2seq_pets_jpd', path='.')
- self.custom_nms.load('./seq2seq_pets_jpd/', verbose=True)
- elif nms_type == 'soft-nms':
-            self.custom_nms = SoftNMS(nms_thres=0.45, device=device)
- elif nms_type == 'fast-nms':
-            self.custom_nms = FastNMS(nms_thres=0.45, device=device)
- elif nms_type == 'cluster-nms':
-            self.custom_nms = ClusterNMS(nms_thres=0.45, device=device)
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- # setup communications
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if detections_topic is not None:
- self.bbox_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=10)
- else:
- self.bbox_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
-        # Run object detection
- boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False, custom_nms=self.custom_nms)
-
- # Get an OpenCV image back
- image = np.float32(image.opencv())
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(boxes)
- if self.bbox_publisher is not None:
- self.bbox_publisher.publish(ros_boxes)
-            rospy.loginfo("Published detection boxes")
-
- # Annotate image and publish result
- # NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
- # only used to test the corresponding bridge methods
- odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
- image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
-
-if __name__ == '__main__':
- # Automatically run on GPU/CPU
- try:
- if mx.context.num_gpus() > 0:
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- # initialize ROS node
- rospy.init_node('opendr_object_detection', anonymous=True)
- rospy.loginfo("Object detection node started!")
-
- input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")
-
- # created node object
- object_detection_node = ObjectDetectionSSDNode(device=device, input_image_topic=input_image_topic)
- # begin ROS communications
- rospy.spin()
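The SSD node above exposes its NMS variant through the `nms_type` argument. The same choice can be made outside ROS; the following is a minimal offline sketch (the image path is an assumption, and Soft-NMS is picked purely as an example) that uses only the calls already imported by the node:

```python
import cv2
import numpy as np

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import SingleShotDetectorLearner, SoftNMS
from opendr.perception.object_detection_2d import draw_bounding_boxes

# Same detector setup as the node
detector = SingleShotDetectorLearner(backbone="vgg16_atrous", device="cpu")
detector.download(path=".", verbose=True)
detector.load("ssd_default_person")

# Swap in one of the NMS implementations the node supports
custom_nms = SoftNMS(nms_thres=0.45, device="cpu")

image = Image(cv2.imread("sample.jpg"))  # placeholder frame
boxes = detector.infer(image, threshold=0.45, keep_size=False, custom_nms=custom_nms)

annotated = draw_bounding_boxes(np.float32(image.opencv()), boxes, class_names=detector.classes)
cv2.imwrite("sample_annotated.jpg", np.uint8(annotated))
```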
diff --git a/projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov3.py b/projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov3.py
deleted file mode 100755
index 93155f148b..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_detection_2d_yolov3.py
+++ /dev/null
@@ -1,123 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import rospy
-import mxnet as mx
-import numpy as np
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.engine.data import Image
-from opendr.perception.object_detection_2d import YOLOv3DetectorLearner
-from opendr.perception.object_detection_2d import draw_bounding_boxes
-
-
-class ObjectDetectionYOLONode:
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
- detections_topic="/opendr/objects", device="cuda", backbone="darknet53"):
- """
-        Creates a ROS Node for 2D object detection with YOLOv3
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
-        annotations)
- :type detections_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- :param backbone: backbone network
- :type backbone: str
- """
-
-        # Initialize the object detector
- self.object_detector = YOLOv3DetectorLearner(backbone=backbone, device=device)
- self.object_detector.download(path=".", verbose=True)
- self.object_detector.load("yolo_default")
- self.class_names = self.object_detector.classes
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- # setup communications
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if detections_topic is not None:
- self.bbox_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=10)
- else:
- self.bbox_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
- rospy.loginfo("image info: {}".format(image.numpy().shape))
-
-        # Run object detection
- boxes = self.object_detector.infer(image, threshold=0.1, keep_size=False)
-
- # Get an OpenCV image back
- image = np.float32(image.opencv())
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(boxes)
- if self.bbox_publisher is not None:
- self.bbox_publisher.publish(ros_boxes)
-            rospy.loginfo("Published detection boxes")
-
- # Annotate image and publish result
- # NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
- # only used to test the corresponding bridge methods
- odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
- image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
-
-if __name__ == '__main__':
- # Automatically run on GPU/CPU
- try:
- if mx.context.num_gpus() > 0:
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- # initialize ROS node
- rospy.init_node('opendr_object_detection', anonymous=True)
- rospy.loginfo("Object detection node started!")
-
- input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")
-
- # created node object
- object_detection_node = ObjectDetectionYOLONode(device=device, input_image_topic=input_image_topic)
- # begin ROS communications
- rospy.spin()
diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py b/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py
deleted file mode 100755
index 0f8d3a7373..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_tracking_2d_fair_mot.py
+++ /dev/null
@@ -1,192 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import cv2
-import torch
-import os
-from opendr.engine.target import TrackingAnnotation
-import rospy
-from vision_msgs.msg import Detection2DArray
-from std_msgs.msg import Int32MultiArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.perception.object_tracking_2d import (
- ObjectTracking2DFairMotLearner,
-)
-from opendr.engine.data import Image
-
-
-class ObjectTracking2DFairMotNode:
- def __init__(
- self,
- input_image_topic="/usb_cam/image_raw",
- output_detection_topic="/opendr/detection",
- output_tracking_id_topic="/opendr/tracking_id",
- output_image_topic="/opendr/image_annotated",
- device="cuda:0",
- model_name="fairmot_dla34",
- temp_dir="temp",
- ):
- """
- Creates a ROS Node for 2D object tracking
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param output_detection_topic: Topic to which we are publishing the detections
- :type output_detection_topic: str
- :param output_tracking_id_topic: Topic to which we are publishing the tracking ids
- :type output_tracking_id_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- :param model_name: the pretrained model to download or a saved model in temp_dir folder to use
- :type model_name: str
- :param temp_dir: the folder to download models
- :type temp_dir: str
- """
-
-        # Initialize the object tracking learner
- self.learner = ObjectTracking2DFairMotLearner(
- device=device, temp_path=temp_dir,
- )
- if not os.path.exists(os.path.join(temp_dir, model_name)):
- ObjectTracking2DFairMotLearner.download(model_name, temp_dir)
-
- self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- self.detection_publisher = rospy.Publisher(
- output_detection_topic, Detection2DArray, queue_size=10
- )
- self.tracking_id_publisher = rospy.Publisher(
- output_tracking_id_topic, Int32MultiArray, queue_size=10
- )
-
-        if output_image_topic is not None:
-            self.output_image_publisher = rospy.Publisher(
-                output_image_topic, ROS_Image, queue_size=10
-            )
-        else:
-            self.output_image_publisher = None
-
- rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
-
- def callback(self, data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding="bgr8")
- tracking_boxes = self.learner.infer(image)
-
- if self.output_image_publisher is not None:
- frame = image.opencv()
- draw_predictions(frame, tracking_boxes)
- message = self.bridge.to_ros_image(
- Image(frame), encoding="bgr8"
- )
- self.output_image_publisher.publish(message)
- rospy.loginfo("Published annotated image")
-
- detection_boxes = tracking_boxes.bounding_box_list()
- ids = [tracking_box.id for tracking_box in tracking_boxes]
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
- if self.detection_publisher is not None:
- self.detection_publisher.publish(ros_boxes)
- rospy.loginfo("Published detection boxes")
-
- ros_ids = Int32MultiArray()
- ros_ids.data = ids
-
- if self.tracking_id_publisher is not None:
- self.tracking_id_publisher.publish(ros_ids)
- rospy.loginfo("Published tracking ids")
-
-
-colors = [
- (255, 0, 255),
- (0, 0, 255),
- (0, 255, 0),
- (255, 0, 0),
- (35, 69, 55),
- (43, 63, 54),
-]
-
-
-def draw_predictions(frame, predictions: TrackingAnnotation, is_centered=False, is_flipped_xy=True):
- global colors
- w, h, _ = frame.shape
-
- for prediction in predictions.boxes:
- prediction = prediction
-
- if not hasattr(prediction, "id"):
- prediction.id = 0
-
- color = colors[int(prediction.id) * 7 % len(colors)]
-
- x = prediction.left
- y = prediction.top
-
- if is_flipped_xy:
- x = prediction.top
- y = prediction.left
-
- if is_centered:
- x -= prediction.width
- y -= prediction.height
-
- cv2.rectangle(
- frame,
- (int(x), int(y)),
- (
- int(x + prediction.width),
- int(y + prediction.height),
- ),
- color,
- 2,
- )
-
-
-if __name__ == "__main__":
- # Automatically run on GPU/CPU
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
- # initialize ROS node
- rospy.init_node("opendr_fair_mot", anonymous=True)
- rospy.loginfo("FairMOT node started")
-
- model_name = rospy.get_param("~model_name", "fairmot_dla34")
- temp_dir = rospy.get_param("~temp_dir", "temp")
- input_image_topic = rospy.get_param(
- "~input_image_topic", "/opendr/dataset_image"
- )
- rospy.loginfo("Using model_name: {}".format(model_name))
-
- # created node object
- fair_mot_node = ObjectTracking2DFairMotNode(
- device=device,
- model_name=model_name,
- input_image_topic=input_image_topic,
- temp_dir=temp_dir,
- )
- # begin ROS communications
- rospy.spin()
diff --git a/projects/opendr_ws/src/perception/scripts/object_tracking_3d_ab3dmot.py b/projects/opendr_ws/src/perception/scripts/object_tracking_3d_ab3dmot.py
deleted file mode 100644
index b9927182ce..0000000000
--- a/projects/opendr_ws/src/perception/scripts/object_tracking_3d_ab3dmot.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import torch
-from opendr.engine.learners import Learner
-import rospy
-from vision_msgs.msg import Detection3DArray
-from std_msgs.msg import Int32MultiArray
-from sensor_msgs.msg import PointCloud as ROS_PointCloud
-from opendr_bridge import ROSBridge
-from opendr.perception.object_tracking_3d import ObjectTracking3DAb3dmotLearner
-from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
-
-
-class ObjectTracking3DAb3dmotNode:
- def __init__(
- self,
- detector: Learner,
- input_point_cloud_topic="/opendr/dataset_point_cloud",
- output_detection3d_topic="/opendr/detection3d",
- output_tracking3d_id_topic="/opendr/tracking3d_id",
- device="cuda:0",
- ):
- """
- Creates a ROS Node for 3D object tracking
-        :param detector: Learner that provides 3D object detections
- :type detector: Learner
- :param input_point_cloud_topic: Topic from which we are reading the input point cloud
-        :type input_point_cloud_topic: str
- :param output_detection3d_topic: Topic to which we are publishing the annotations
- :type output_detection3d_topic: str
- :param output_tracking3d_id_topic: Topic to which we are publishing the tracking ids
- :type output_tracking3d_id_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
-
- self.detector = detector
- self.learner = ObjectTracking3DAb3dmotLearner(
- device=device
- )
-
- # Initialize OpenDR ROSBridge object
- self.bridge = ROSBridge()
-
- self.detection_publisher = rospy.Publisher(
- output_detection3d_topic, Detection3DArray, queue_size=10
- )
- self.tracking_id_publisher = rospy.Publisher(
- output_tracking3d_id_topic, Int32MultiArray, queue_size=10
- )
-
- rospy.Subscriber(input_point_cloud_topic, ROS_PointCloud, self.callback)
-
- def callback(self, data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
-        :param data: input point cloud message
-        :type data: sensor_msgs.msg.PointCloud
- """
-
-        # Convert sensor_msgs.msg.PointCloud into an OpenDR PointCloud
- point_cloud = self.bridge.from_ros_point_cloud(data)
- detection_boxes = self.detector.infer(point_cloud)
- tracking_boxes = self.learner.infer(detection_boxes)
- ids = [tracking_box.id for tracking_box in tracking_boxes]
-
- # Convert detected boxes to ROS type and publish
- ros_boxes = self.bridge.to_ros_boxes_3d(detection_boxes, classes=["Car", "Van", "Truck", "Pedestrian", "Cyclist"])
- if self.detection_publisher is not None:
- self.detection_publisher.publish(ros_boxes)
- rospy.loginfo("Published detection boxes")
-
- ros_ids = Int32MultiArray()
- ros_ids.data = ids
-
- if self.tracking_id_publisher is not None:
- self.tracking_id_publisher.publish(ros_ids)
- rospy.loginfo("Published tracking ids")
-
-if __name__ == "__main__":
- # Automatically run on GPU/CPU
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
-
- # initialize ROS node
- rospy.init_node("opendr_voxel_detection_3d", anonymous=True)
- rospy.loginfo("AB3DMOT node started")
-
- input_point_cloud_topic = rospy.get_param(
- "~input_point_cloud_topic", "/opendr/dataset_point_cloud"
- )
- temp_dir = rospy.get_param("~temp_dir", "temp")
- detector_model_name = rospy.get_param("~detector_model_name", "tanet_car_xyres_16")
- detector_model_config_path = rospy.get_param(
- "~detector_model_config_path", os.path.join(
- "..", "..", "src", "opendr", "perception", "object_detection_3d",
- "voxel_object_detection_3d", "second_detector", "configs", "tanet",
- "car", "test_short.proto"
- )
- )
-
- detector = VoxelObjectDetection3DLearner(
- device=device, temp_path=temp_dir, model_config_path=detector_model_config_path
- )
- if not os.path.exists(os.path.join(temp_dir, detector_model_name)):
- VoxelObjectDetection3DLearner.download(detector_model_name, temp_dir)
-
- detector.load(os.path.join(temp_dir, detector_model_name), verbose=True)
-
- # created node object
- ab3dmot_node = ObjectTracking3DAb3dmotNode(
- detector=detector,
- device=device,
- input_point_cloud_topic=input_point_cloud_topic,
- )
- # begin ROS communications
- rospy.spin()
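The AB3DMOT node above composes two learners: a 3D detector produces per-frame bounding boxes and the tracker assigns ids across frames. A hedged offline sketch of that pairing, reusing the same default model name and config path the node reads from its ROS parameters (obtaining a point cloud frame is left out of scope):

```python
import os

from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
from opendr.perception.object_tracking_3d import ObjectTracking3DAb3dmotLearner

temp_dir = "temp"
detector_model_name = "tanet_car_xyres_16"
config_path = os.path.join("..", "..", "src", "opendr", "perception", "object_detection_3d",
                           "voxel_object_detection_3d", "second_detector", "configs",
                           "tanet", "car", "test_short.proto")

# 3D detector, downloaded on first use (as in the node's __main__ block)
detector = VoxelObjectDetection3DLearner(device="cpu", temp_path=temp_dir,
                                         model_config_path=config_path)
if not os.path.exists(os.path.join(temp_dir, detector_model_name)):
    VoxelObjectDetection3DLearner.download(detector_model_name, temp_dir)
detector.load(os.path.join(temp_dir, detector_model_name), verbose=True)

# AB3DMOT tracker that consumes the detector's output
tracker = ObjectTracking3DAb3dmotLearner(device="cpu")

# For each incoming opendr.engine.data.PointCloud `point_cloud`:
#     detections = detector.infer(point_cloud)
#     tracked = tracker.infer(detections)
#     ids = [box.id for box in tracked]
```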
diff --git a/projects/opendr_ws/src/perception/scripts/pose_estimation.py b/projects/opendr_ws/src/perception/scripts/pose_estimation.py
deleted file mode 100644
index 855ada40cf..0000000000
--- a/projects/opendr_ws/src/perception/scripts/pose_estimation.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-from vision_msgs.msg import Detection2DArray
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.perception.pose_estimation import draw
-from opendr.perception.pose_estimation import LightweightOpenPoseLearner
-from opendr.engine.data import Image
-
-
-class PoseEstimationNode:
-
- def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_pose_annotated",
- pose_annotations_topic="/opendr/poses", device="cuda"):
- """
- Creates a ROS Node for pose detection
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
- annotated image)
- :type output_image_topic: str
- :param pose_annotations_topic: Topic to which we are publishing the annotations (if None, we are not publishing
-        pose annotations)
- :type pose_annotations_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
- if output_image_topic is not None:
- self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
- else:
- self.image_publisher = None
-
- if pose_annotations_topic is not None:
- self.pose_publisher = rospy.Publisher(pose_annotations_topic, Detection2DArray, queue_size=10)
- else:
- self.pose_publisher = None
-
- self.input_image_topic = input_image_topic
-
- self.bridge = ROSBridge()
-
- # Initialize the pose estimation
- self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=0,
- mobilenet_use_stride=False,
- half_precision=False)
- self.pose_estimator.download(path=".", verbose=True)
- self.pose_estimator.load("openpose_default")
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- rospy.init_node('opendr_pose_estimation', anonymous=True)
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
- rospy.loginfo("Pose estimation node started!")
- rospy.spin()
-
- def callback(self, data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param data: input message
- :type data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image
- image = self.bridge.from_ros_image(data, encoding='bgr8')
-
- # Run pose estimation
- poses = self.pose_estimator.infer(image)
-
- # Get an OpenCV image back
- image = image.opencv()
- # Annotate image and publish results
- for pose in poses:
- if self.pose_publisher is not None:
- ros_pose = self.bridge.to_ros_pose(pose)
- self.pose_publisher.publish(ros_pose)
-                # We can get the data back using self.bridge.from_ros_pose(ros_pose)
- # e.g., opendr_pose = self.bridge.from_ros_pose(ros_pose)
- draw(image, pose)
-
- if self.image_publisher is not None:
- message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
- self.image_publisher.publish(message)
-
-
-if __name__ == '__main__':
-    # Select the device for running inference
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = 'cuda'
- else:
- print("GPU not found. Using CPU instead.")
- device = 'cpu'
- except:
- device = 'cpu'
-
- pose_estimation_node = PoseEstimationNode(device=device)
- pose_estimation_node.listen()
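As with the other removed nodes, the pose estimation node is a thin wrapper around a learner. A minimal offline sketch with the same learner configuration (the image path is a placeholder assumption):

```python
import cv2

from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner, draw

# Same configuration the node uses
pose_estimator = LightweightOpenPoseLearner(device="cpu", num_refinement_stages=0,
                                            mobilenet_use_stride=False, half_precision=False)
pose_estimator.download(path=".", verbose=True)
pose_estimator.load("openpose_default")

image = Image(cv2.imread("sample.jpg"))  # placeholder frame
poses = pose_estimator.infer(image)

# draw() annotates the OpenCV image in place, one pose at a time
frame = image.opencv()
for pose in poses:
    draw(frame, pose)
cv2.imwrite("sample_poses.jpg", frame)
```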
diff --git a/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py b/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py
deleted file mode 100755
index 69150856ad..0000000000
--- a/projects/opendr_ws/src/perception/scripts/rgbd_hand_gesture_recognition.py
+++ /dev/null
@@ -1,131 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import rospy
-import torch
-import numpy as np
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-import os
-from opendr.perception.multimodal_human_centric import RgbdHandGestureLearner
-from opendr.engine.data import Image
-from vision_msgs.msg import Classification2D
-import message_filters
-import cv2
-
-
-class RgbdHandGestureNode:
-
- def __init__(self, input_image_topic="/usb_cam/image_raw", input_depth_image_topic="/usb_cam/image_raw",
- gesture_annotations_topic="/opendr/gestures", device="cuda"):
- """
- Creates a ROS Node for gesture recognition from RGBD
- :param input_image_topic: Topic from which we are reading the input image
- :type input_image_topic: str
- :param input_depth_image_topic: Topic from which we are reading the input depth image
- :type input_depth_image_topic: str
- :param gesture_annotations_topic: Topic to which we are publishing the predicted gesture class
- :type gesture_annotations_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
-
- self.gesture_publisher = rospy.Publisher(gesture_annotations_topic, Classification2D, queue_size=10)
-
- image_sub = message_filters.Subscriber(input_image_topic, ROS_Image)
- depth_sub = message_filters.Subscriber(input_depth_image_topic, ROS_Image)
- # synchronize image and depth data topics
- ts = message_filters.TimeSynchronizer([image_sub, depth_sub], 10)
- ts.registerCallback(self.callback)
-
- self.bridge = ROSBridge()
-
- # Initialize the gesture recognition
- self.gesture_learner = RgbdHandGestureLearner(n_class=16, architecture="mobilenet_v2", device=device)
- model_path = './mobilenet_v2'
- if not os.path.exists(model_path):
- self.gesture_learner.download(path=model_path)
- self.gesture_learner.load(path=model_path)
-
- # mean and std for preprocessing, based on HANDS dataset
- self.mean = np.asarray([0.485, 0.456, 0.406, 0.0303]).reshape(1, 1, 4)
- self.std = np.asarray([0.229, 0.224, 0.225, 0.0353]).reshape(1, 1, 4)
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- rospy.init_node('opendr_gesture_recognition', anonymous=True)
- rospy.loginfo("RGBD gesture recognition node started!")
- rospy.spin()
-
- def callback(self, image_data, depth_data):
- """
-        Callback that processes the input data and publishes to the corresponding topics
- :param image_data: input image message
- :type image_data: sensor_msgs.msg.Image
- :param depth_data: input depth image message
- :type depth_data: sensor_msgs.msg.Image
- """
-
- # Convert sensor_msgs.msg.Image into OpenDR Image and preprocess
- image = self.bridge.from_ros_image(image_data, encoding='bgr8')
- depth_data.encoding = 'mono16'
- depth_image = self.bridge.from_ros_image_to_depth(depth_data, encoding='mono16')
- img = self.preprocess(image, depth_image)
-
- # Run gesture recognition
- gesture_class = self.gesture_learner.infer(img)
-
- # Publish results
- ros_gesture = self.bridge.from_category_to_rosclass(gesture_class)
- self.gesture_publisher.publish(ros_gesture)
-
- def preprocess(self, image, depth_img):
- '''
-        Preprocess the color and depth images and concatenate them into a single 4-channel input
-        :param image: input color image
-        :type image: engine.data.Image
-        :param depth_img: input depth image
-        :type depth_img: engine.data.Image
- '''
- image = image.convert(format='channels_last') / (2**8 - 1)
- depth_img = depth_img.convert(format='channels_last') / (2**16 - 1)
-
- # resize the images to 224x224
- image = cv2.resize(image, (224, 224))
- depth_img = cv2.resize(depth_img, (224, 224))
-
- # concatenate and standardize
- img = np.concatenate([image, np.expand_dims(depth_img, axis=-1)], axis=-1)
- img = (img - self.mean) / self.std
- img = Image(img, dtype=np.float32)
- return img
-
-if __name__ == '__main__':
-    # Select the device for running inference
- try:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- except:
- device = 'cpu'
-
- # default topics are according to kinectv2 drivers at https://github.com/OpenKinect/libfreenect2
-    # and https://github.com/code-iai/iai_kinect2
- depth_topic = "/kinect2/qhd/image_depth_rect"
- image_topic = "/kinect2/qhd/image_color_rect"
- gesture_node = RgbdHandGestureNode(input_image_topic=image_topic, input_depth_image_topic=depth_topic, device=device)
- gesture_node.listen()
diff --git a/projects/opendr_ws/src/perception/scripts/semantic_segmentation_bisenet.py b/projects/opendr_ws/src/perception/scripts/semantic_segmentation_bisenet.py
deleted file mode 100644
index 32390c9157..0000000000
--- a/projects/opendr_ws/src/perception/scripts/semantic_segmentation_bisenet.py
+++ /dev/null
@@ -1,111 +0,0 @@
-#!/usr/bin/env python
-# Copyright 2020-2022 OpenDR European Project
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import torch
-import rospy
-from sensor_msgs.msg import Image as ROS_Image
-from opendr_bridge import ROSBridge
-from opendr.engine.data import Image
-from opendr.perception.semantic_segmentation import BisenetLearner
-import numpy as np
-import cv2
-
-
-class BisenetNode:
- def __init__(self,
- input_image_topic,
- output_heatmap_topic=None,
- device="cuda"
- ):
- """
- Initialize the Bisenet ROS node and create an instance of the respective learner class.
- :param input_image_topic: ROS topic for the input image
- :type input_image_topic: str
- :param output_heatmap_topic: ROS topic for the predicted heatmap
- :type output_heatmap_topic: str
- :param device: device on which we are running inference ('cpu' or 'cuda')
- :type device: str
- """
- self.input_image_topic = input_image_topic
- self.output_heatmap_topic = output_heatmap_topic
-
- if self.output_heatmap_topic is not None:
- self._heatmap_publisher = rospy.Publisher(f'{self.output_heatmap_topic}/semantic', ROS_Image, queue_size=10)
- else:
- self._heatmap_publisher = None
-
- rospy.Subscriber(self.input_image_topic, ROS_Image, self.callback)
-
- # Initialize OpenDR ROSBridge object
- self._bridge = ROSBridge()
-
- # Initialize the semantic segmentation model
- self._learner = BisenetLearner(device=device)
- self._learner.download(path="bisenet_camvid")
- self._learner.load("bisenet_camvid")
-
- self._colors = np.random.randint(0, 256, (256, 3), dtype=np.uint8)
-
- def listen(self):
- """
- Start the node and begin processing input data
- """
- rospy.init_node('bisenet', anonymous=True)
- rospy.loginfo("Bisenet node started!")
- rospy.spin()
-
- def callback(self, data: ROS_Image):
- """
- Predict the heatmap from the input image and publish the results.
- :param data: Input image message
- :type data: sensor_msgs.msg.Image
- """
- # Convert sensor_msgs.msg.Image to OpenDR Image
- image = self._bridge.from_ros_image(data)
-
- try:
- # Retrieve the OpenDR heatmap
- prediction = self._learner.infer(image)
-
- if self._heatmap_publisher is not None and self._heatmap_publisher.get_num_connections() > 0:
- heatmap_np = prediction.numpy()
- heatmap_o = self._colors[heatmap_np]
- heatmap_o = cv2.resize(np.uint8(heatmap_o), (960, 720))
- self._heatmap_publisher.publish(self._bridge.to_ros_image(Image(heatmap_o), encoding='bgr8'))
-
- except Exception:
- rospy.logwarn('Failed to generate prediction.')
-
-
-if __name__ == '__main__':
-    # Select the device for running inference
- try:
- if torch.cuda.is_available():
- print("GPU found.")
- device = "cuda"
- else:
- print("GPU not found. Using CPU instead.")
- device = "cpu"
- except:
- device = "cpu"
-
- parser = argparse.ArgumentParser()
- parser.add_argument('image_topic', type=str, help='listen to images on this topic')
- parser.add_argument('--heatmap_topic', type=str, help='publish the heatmap on this topic')
- args = parser.parse_args()
-
- bisenet_node = BisenetNode(device=device, input_image_topic=args.image_topic, output_heatmap_topic=args.heatmap_topic)
- bisenet_node.listen()
diff --git a/projects/opendr_ws_2/README.md b/projects/opendr_ws_2/README.md
new file mode 100755
index 0000000000..4379f69595
--- /dev/null
+++ b/projects/opendr_ws_2/README.md
@@ -0,0 +1,88 @@
+# opendr_ws_2
+
+## Description
+This ROS2 workspace contains the ROS2 nodes and tools developed by the OpenDR project. Currently, the ROS2 nodes are compatible with ROS2 Foxy.
+This workspace contains the `opendr_bridge` package, which provides the `ROS2Bridge` class used to convert OpenDR data types and targets into ROS2-compatible
+ones, similar to CvBridge. The workspace also contains the `opendr_interface` package, which provides message and service definitions for ROS2-compatible OpenDR data types. You can find more information in the corresponding [opendr_ros2_bridge documentation](../../docs/reference/ros2bridge.md) and [opendr_ros2_interfaces documentation]().
+
+## First time setup
+
+For the initial setup you can follow the instructions below:
+
+0. Make sure that [ROS2 Foxy is installed](https://docs.ros.org/en/foxy/Installation/Ubuntu-Install-Debians.html).
+
+1. Source the necessary distribution tools:
+ ```shell
+ source /opt/ros/foxy/setup.bash
+ ```
+ _For convenience, you can add this line to your `.bashrc` so you don't have to source the tools each time you open a terminal window._
+
+
+
+2. Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using:
+ ```shell
+ source bin/activate.sh
+ ```
+ You need to do this step every time before running an OpenDR node.
+
+3. Navigate into the OpenDR ROS2 workspace:
+ ```shell
+ cd projects/opendr_ws_2
+ ```
+
+4. Build the packages inside the workspace:
+ ```shell
+ colcon build
+ ```
+
+5. Source the workspace:
+ ```shell
+ . install/setup.bash
+ ```
+   You are now ready to run an OpenDR ROS2 node.
+
+#### After first time setup
+For running OpenDR nodes after you have completed the initial setup, you can skip step 0 from the list above.
+You can also skip building the workspace (step 4), provided it has already been built and no changes have been made to the code inside the workspace (e.g., you have not modified the source code of a node).
+
+#### More information
+After completing the setup, you can find more information in the [opendr perception package README](src/opendr_perception/README.md), which provides a concise list of prerequisites and helpful notes on viewing the output of the nodes and optimizing their performance.
+
+#### Node documentation
+You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to the documentation for specific nodes, which includes instructions on how to run and modify them.
+
+**For first time users we suggest reading the introductory sections (prerequisites and notes) first.**
+
+## Structure
+
+Currently, apart from tools, opendr_ws_2 contains the following ROS2 nodes (categorized according to the input they receive):
+
+### [Perception](src/opendr_perception/README.md)
+#### RGB input
+1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros2-node)
+2. [High Resolution Pose Estimation](src/opendr_perception/README.md#high-resolution-pose-estimation-ros2-node)
+3. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros2-node)
+4. [Face Detection](src/opendr_perception/README.md#face-detection-ros2-node)
+5. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros2-node)
+6. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros2-nodes)
+7. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros2-node)
+8. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros2-nodes)
+9. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros2-node)
+10. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros2-node)
+11. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros2-node)
+12. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros2-node)
+13. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros2-node)
+14. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros2-node)
+#### RGB + Infrared input
+1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros2-node)
+#### RGBD input
+1. [RGBD Hand Gesture Recognition](src/opendr_perception/README.md#rgbd-hand-gesture-recognition-ros2-node)
+#### RGB + Audio input
+1. [Audiovisual Emotion Recognition](src/opendr_perception/README.md#audiovisual-emotion-recognition-ros2-node)
+#### Audio input
+1. [Speech Command Recognition](src/opendr_perception/README.md#speech-command-recognition-ros2-node)
+#### Point cloud input
+1. [3D Object Detection Voxel](src/opendr_perception/README.md#3d-object-detection-voxel-ros2-node)
+2. [3D Object Tracking AB3DMOT](src/opendr_perception/README.md#3d-object-tracking-ab3dmot-ros2-node)
+#### Biosignal input
+1. [Heart Anomaly Detection](src/opendr_perception/README.md#heart-anomaly-detection-ros2-node)
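The workspace described above relies on the `opendr_bridge` package added below. As an illustration of how a custom ROS2 node can use it, here is a hedged sketch (the topic names and the echo behaviour are assumptions for illustration, not an existing OpenDR node) that converts incoming images to the OpenDR format and back before republishing:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image as ImageMsg
from opendr_bridge import ROS2Bridge


class ImageEchoNode(Node):
    """Toy node: round-trips images through the OpenDR data types."""

    def __init__(self):
        super().__init__('opendr_image_echo')
        self.bridge = ROS2Bridge()
        self.publisher = self.create_publisher(ImageMsg, '/opendr/image_echo', 10)
        self.create_subscription(ImageMsg, '/image_raw', self.callback, 10)

    def callback(self, msg):
        # sensor_msgs/Image -> opendr.engine.data.Image; an OpenDR learner would run here
        image = self.bridge.from_ros_image(msg, encoding='bgr8')
        self.publisher.publish(self.bridge.to_ros_image(image, encoding='bgr8'))


def main():
    rclpy.init()
    node = ImageEchoNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```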
diff --git a/projects/opendr_ws_2/images/opendr_node_diagram.png b/projects/opendr_ws_2/images/opendr_node_diagram.png
new file mode 100644
index 0000000000..70b202ad3c
Binary files /dev/null and b/projects/opendr_ws_2/images/opendr_node_diagram.png differ
diff --git a/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/__init__.py b/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/__init__.py
new file mode 100644
index 0000000000..06c41996d7
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/__init__.py
@@ -0,0 +1,3 @@
+from opendr_bridge.bridge import ROS2Bridge
+
+__all__ = ['ROS2Bridge', ]
diff --git a/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/bridge.py b/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/bridge.py
new file mode 100644
index 0000000000..3deb3f8207
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/opendr_bridge/bridge.py
@@ -0,0 +1,629 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+from opendr.engine.data import Image, PointCloud, Timeseries
+from opendr.engine.target import (
+ Pose, BoundingBox, BoundingBoxList, Category,
+ BoundingBox3D, BoundingBox3DList, TrackingAnnotation
+)
+from cv_bridge import CvBridge
+from std_msgs.msg import String, ColorRGBA, Header
+from sensor_msgs.msg import Image as ImageMsg, PointCloud as PointCloudMsg, ChannelFloat32 as ChannelFloat32Msg
+from vision_msgs.msg import (
+ Detection2DArray, Detection2D, BoundingBox2D, ObjectHypothesisWithPose,
+ Detection3D, Detection3DArray, BoundingBox3D as BoundingBox3DMsg,
+ Classification2D, ObjectHypothesis
+)
+from shape_msgs.msg import Mesh, MeshTriangle
+from geometry_msgs.msg import (
+ Pose2D, Point32 as Point32Msg,
+ Quaternion as QuaternionMsg, Pose as Pose3D,
+ Point
+)
+from opendr_interface.msg import OpenDRPose2D, OpenDRPose2DKeypoint, OpenDRPose3D, OpenDRPose3DKeypoint
+
+
+class ROS2Bridge:
+ """
+ This class provides an interface to convert OpenDR data types and targets into ROS2-compatible ones similar
+ to CvBridge.
+ For each data type X two methods are provided:
+ from_ros_X: which converts the ROS2 equivalent of X into OpenDR data type
+ to_ros_X: which converts the OpenDR data type into the ROS2 equivalent of X
+ """
+
+ def __init__(self):
+ self._cv_bridge = CvBridge()
+
+ def to_ros_image(self, image: Image, encoding: str='passthrough') -> ImageMsg:
+ """
+ Converts an OpenDR image into a ROS2 image message
+ :param image: OpenDR image to be converted
+ :type image: engine.data.Image
+ :param encoding: encoding to be used for the conversion (inherited from CvBridge)
+ :type encoding: str
+ :return: ROS2 image
+ :rtype: sensor_msgs.msg.Image
+ """
+ # Convert from the OpenDR standard (CHW/RGB) to OpenCV standard (HWC/BGR)
+ message = self._cv_bridge.cv2_to_imgmsg(image.opencv(), encoding=encoding)
+ return message
+
+ def from_ros_image(self, message: ImageMsg, encoding: str='passthrough') -> Image:
+ """
+ Converts a ROS2 image message into an OpenDR image
+ :param message: ROS2 image to be converted
+ :type message: sensor_msgs.msg.Image
+ :param encoding: encoding to be used for the conversion (inherited from CvBridge)
+ :type encoding: str
+ :return: OpenDR image (RGB)
+ :rtype: engine.data.Image
+ """
+ cv_image = self._cv_bridge.imgmsg_to_cv2(message, desired_encoding=encoding)
+ image = Image(np.asarray(cv_image, dtype=np.uint8))
+ return image
+
+ def to_ros_pose(self, pose: Pose):
+ """
+        Converts an OpenDR Pose into an OpenDRPose2D msg that can carry the same information, i.e. a list of keypoints,
+ the pose detection confidence and the pose id.
+ Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel position on input image with (0, 0)
+ being the top-left corner.
+ :param pose: OpenDR Pose to be converted to OpenDRPose2D
+ :type pose: engine.target.Pose
+ :return: ROS message with the pose
+ :rtype: opendr_interface.msg.OpenDRPose2D
+ """
+ data = pose.data
+ # Setup ros pose
+ ros_pose = OpenDRPose2D()
+ ros_pose.pose_id = int(pose.id)
+ if pose.confidence:
+ ros_pose.conf = pose.confidence
+
+ # Add keypoints to pose
+ for i in range(data.shape[0]):
+ ros_keypoint = OpenDRPose2DKeypoint()
+ ros_keypoint.kpt_name = pose.kpt_names[i]
+ ros_keypoint.x = int(data[i][0])
+ ros_keypoint.y = int(data[i][1])
+ # Add keypoint to pose
+ ros_pose.keypoint_list.append(ros_keypoint)
+ return ros_pose
+
+ def from_ros_pose(self, ros_pose: OpenDRPose2D):
+ """
+ Converts an OpenDRPose2D message into an OpenDR Pose.
+ :param ros_pose: the ROS pose to be converted
+ :type ros_pose: opendr_interface.msg.OpenDRPose2D
+ :return: an OpenDR Pose
+ :rtype: engine.target.Pose
+ """
+ ros_keypoints = ros_pose.keypoint_list
+ keypoints = []
+ pose_id, confidence = ros_pose.pose_id, ros_pose.conf
+
+ for ros_keypoint in ros_keypoints:
+ keypoints.append(int(ros_keypoint.x))
+ keypoints.append(int(ros_keypoint.y))
+ data = np.asarray(keypoints).reshape((-1, 2))
+
+ pose = Pose(data, confidence)
+ pose.id = pose_id
+ return pose
+
+ def to_ros_boxes(self, box_list):
+ """
+ Converts an OpenDR BoundingBoxList into a Detection2DArray msg that can carry the same information.
+ Each bounding box is represented by its center coordinates as well as its width/height dimensions.
+ :param box_list: OpenDR bounding boxes to be converted
+ :type box_list: engine.target.BoundingBoxList
+ :return: ROS2 message with the bounding boxes
+ :rtype: vision_msgs.msg.Detection2DArray
+ """
+ boxes = box_list.data
+ ros_boxes = Detection2DArray()
+ for idx, box in enumerate(boxes):
+ ros_box = Detection2D()
+ ros_box.bbox = BoundingBox2D()
+ ros_box.results.append(ObjectHypothesisWithPose())
+ ros_box.bbox.center = Pose2D()
+ ros_box.bbox.center.x = box.left + box.width / 2.
+ ros_box.bbox.center.y = box.top + box.height / 2.
+ ros_box.bbox.size_x = float(box.width)
+ ros_box.bbox.size_y = float(box.height)
+ ros_box.results[0].id = str(box.name)
+ if box.confidence:
+ ros_box.results[0].score = float(box.confidence)
+ ros_boxes.detections.append(ros_box)
+ return ros_boxes
+
+ def from_ros_boxes(self, ros_detections):
+ """
+ Converts a ROS2 message with bounding boxes into an OpenDR BoundingBoxList
+ :param ros_detections: the boxes to be converted (represented as vision_msgs.msg.Detection2DArray)
+ :type ros_detections: vision_msgs.msg.Detection2DArray
+ :return: an OpenDR BoundingBoxList
+ :rtype: engine.target.BoundingBoxList
+ """
+ ros_boxes = ros_detections.detections
+ bboxes = BoundingBoxList(boxes=[])
+
+ for idx, box in enumerate(ros_boxes):
+ width = box.bbox.size_x
+ height = box.bbox.size_y
+ left = box.bbox.center.x - width / 2.
+ top = box.bbox.center.y - height / 2.
+ _id = int(float(box.results[0].id.strip('][').split(', ')[0]))
+ bbox = BoundingBox(top=top, left=left, width=width, height=height, name=_id)
+ bboxes.data.append(bbox)
+ return bboxes
+
+ def to_ros_bounding_box_list(self, bounding_box_list):
+ """
+ Converts an OpenDR bounding_box_list into a Detection2DArray msg that can carry the same information
+ The object class is also embedded on each bounding box (stored in ObjectHypothesisWithPose).
+ :param bounding_box_list: OpenDR bounding_box_list to be converted
+ :type bounding_box_list: engine.target.BoundingBoxList
+ :return: ROS2 message with the bounding box list
+ :rtype: vision_msgs.msg.Detection2DArray
+ """
+ detections = Detection2DArray()
+ for bounding_box in bounding_box_list:
+ detection = Detection2D()
+ detection.bbox = BoundingBox2D()
+ detection.results.append(ObjectHypothesisWithPose())
+ detection.bbox.center = Pose2D()
+ detection.bbox.center.x = bounding_box.left + bounding_box.width / 2.0
+ detection.bbox.center.y = bounding_box.top + bounding_box.height / 2.0
+ detection.bbox.size_x = float(bounding_box.width)
+ detection.bbox.size_y = float(bounding_box.height)
+ detection.results[0].id = str(bounding_box.name)
+ detection.results[0].score = float(bounding_box.confidence)
+ detections.detections.append(detection)
+ return detections
+
+ def from_ros_bounding_box_list(self, ros_detection_2d_array):
+ """
+ Converts a ROS2 message with bounding box list payload into an OpenDR BoundingBoxList
+ :param ros_detection_2d_array: the bounding boxes to be converted (represented as
+ vision_msgs.msg.Detection2DArray)
+ :type ros_detection_2d_array: vision_msgs.msg.Detection2DArray
+ :return: an OpenDR bounding box list
+ :rtype: engine.target.BoundingBoxList
+ """
+ detections = ros_detection_2d_array.detections
+ boxes = []
+
+ for detection in detections:
+ width = detection.bbox.size_x
+ height = detection.bbox.size_y
+ left = detection.bbox.center.x - width / 2.0
+ top = detection.bbox.center.y - height / 2.0
+ name = detection.results[0].id
+ score = detection.results[0].score
+ boxes.append(BoundingBox(name, left, top, width, height, score))
+ bounding_box_list = BoundingBoxList(boxes)
+ return bounding_box_list
+
+ def from_ros_single_tracking_annotation(self, ros_detection_box):
+ """
+ Converts a ROS Detection2D message into an OpenDR TrackingAnnotation.
+ Since the message carries no tracking id, the id of the returned annotation is set to 0.
+ :param ros_detection_box: The box to be converted.
+ :type ros_detection_box: vision_msgs.msg.Detection2D
+ :return: An OpenDR TrackingAnnotation
+ :rtype: engine.target.TrackingAnnotation
+ """
+ width = ros_detection_box.bbox.size_x
+ height = ros_detection_box.bbox.size_y
+ left = ros_detection_box.bbox.center.x - width / 2.
+ top = ros_detection_box.bbox.center.y - height / 2.
+ id = 0
+ bbox = TrackingAnnotation(
+ name=id,
+ left=left,
+ top=top,
+ width=width,
+ height=height,
+ id=0,
+ frame=-1
+ )
+ return bbox
+
+ def to_ros_single_tracking_annotation(self, tracking_annotation):
+ """
+ Converts an OpenDR TrackingAnnotation into a ROS Detection2D message.
+ :param tracking_annotation: The box to be converted.
+ :type tracking_annotation: engine.target.TrackingAnnotation
+ :return: A ROS vision_msgs.msg.Detection2D
+ :rtype: vision_msgs.msg.Detection2D
+ """
+ ros_box = Detection2D()
+ ros_box.bbox = BoundingBox2D()
+ ros_box.results.append(ObjectHypothesisWithPose())
+ ros_box.bbox.center = Pose2D()
+ ros_box.bbox.center.x = tracking_annotation.left + tracking_annotation.width / 2.0
+ ros_box.bbox.center.y = tracking_annotation.top + tracking_annotation.height / 2.0
+ ros_box.bbox.size_x = float(tracking_annotation.width)
+ ros_box.bbox.size_y = float(tracking_annotation.height)
+ ros_box.results[0].id = str(tracking_annotation.name)
+ ros_box.results[0].score = float(-1)
+ return ros_box
+
+ def to_ros_face(self, category):
+ """
+ Converts an OpenDR category into an ObjectHypothesisWithPose msg that can carry the Category.data and
+ Category.confidence.
+ :param category: OpenDR category to be converted
+ :type category: engine.target.Category
+ :return: ROS2 message with the category.data and category.confidence
+ :rtype: vision_msgs.msg.ObjectHypothesisWithPose
+ """
+ result = ObjectHypothesisWithPose()
+ result.id = str(category.data)
+ result.score = category.confidence
+ return result
+
+ def from_ros_face(self, ros_hypothesis):
+ """
+ Converts a ROS2 message with category payload into an OpenDR category
+ :param ros_hypothesis: the object hypothesis to be converted
+ :type ros_hypothesis: vision_msgs.msg.ObjectHypothesis
+ :return: an OpenDR category
+ :rtype: engine.target.Category
+ """
+ return Category(prediction=ros_hypothesis.id, description=None,
+ confidence=ros_hypothesis.score)
+
+ def to_ros_face_id(self, category):
+ """
+ Converts an OpenDR category into a string msg that can carry the Category.description.
+ :param category: OpenDR category to be converted
+ :type category: engine.target.Category
+ :return: ROS2 message with the category.description
+ :rtype: std_msgs.msg.String
+ """
+ result = String()
+ result.data = category.description
+ return result
+
+ def from_ros_point_cloud(self, point_cloud: PointCloudMsg):
+ """
+ Converts a ROS PointCloud message into an OpenDR PointCloud
+ :param point_cloud: ROS PointCloud to be converted
+ :type point_cloud: sensor_msgs.msg.PointCloud
+ :return: OpenDR PointCloud
+ :rtype: engine.data.PointCloud
+ """
+
+ points = np.empty([len(point_cloud.points), 3 + len(point_cloud.channels)], dtype=np.float32)
+
+ for i in range(len(point_cloud.points)):
+ point = point_cloud.points[i]
+ x, y, z = point.x, point.y, point.z
+
+ points[i, 0] = x
+ points[i, 1] = y
+ points[i, 2] = z
+
+ for q in range(len(point_cloud.channels)):
+ points[i, 3 + q] = point_cloud.channels[q].values[i]
+
+ result = PointCloud(points)
+
+ return result
+
+ def to_ros_point_cloud(self, point_cloud, time_stamp):
+ """
+ Converts an OpenDR PointCloud message into a ROS2 PointCloud
+ :param point_cloud: OpenDR PointCloud
+ :type point_cloud: engine.data.PointCloud
+ :param time_stamp: Time stamp
+ :type time_stamp: ROS Time
+ :return: ROS PointCloud
+ :rtype: sensor_msgs.msg.PointCloud
+ """
+
+ ros_point_cloud = PointCloudMsg()
+
+ header = Header()
+
+ header.stamp = time_stamp
+ ros_point_cloud.header = header
+
+ channels_count = point_cloud.data.shape[-1] - 3
+
+ channels = [ChannelFloat32Msg(name="channel_" + str(i), values=[]) for i in range(channels_count)]
+ points = []
+
+ for point in point_cloud.data:
+ point_msg = Point32Msg()
+ point_msg.x = float(point[0])
+ point_msg.y = float(point[1])
+ point_msg.z = float(point[2])
+ points.append(point_msg)
+ for i in range(channels_count):
+ channels[i].values.append(float(point[3 + i]))
+
+ ros_point_cloud.points = points
+ ros_point_cloud.channels = channels
+
+ return ros_point_cloud
+
+ def from_ros_boxes_3d(self, ros_boxes_3d):
+ """
+ Converts a ROS2 Detection3DArray message into an OpenDR BoundingBox3DList object.
+ :param ros_boxes_3d: The ROS boxes to be converted.
+ :type ros_boxes_3d: vision_msgs.msg.Detection3DArray
+ :return: An OpenDR BoundingBox3DList object.
+ :rtype: engine.target.BoundingBox3DList
+ """
+ boxes = []
+
+ for ros_box in ros_boxes_3d.detections:
+
+ box = BoundingBox3D(
+ name=ros_box.results[0].id,
+ truncated=0,
+ occluded=0,
+ bbox2d=None,
+ dimensions=np.array([
+ ros_box.bbox.size.x,
+ ros_box.bbox.size.y,
+ ros_box.bbox.size.z,
+ ]),
+ location=np.array([
+ ros_box.bbox.center.position.x,
+ ros_box.bbox.center.position.y,
+ ros_box.bbox.center.position.z,
+ ]),
+ rotation_y=ros_box.bbox.center.orientation.y,
+ score=ros_box.results[0].score,
+ )
+ boxes.append(box)
+
+ result = BoundingBox3DList(boxes)
+ return result
+
+ def to_ros_boxes_3d(self, boxes_3d):
+ """
+ Converts an OpenDR BoundingBox3DList object into a ROS2 Detection3DArray message.
+ :param boxes_3d: The OpenDR boxes to be converted.
+ :type boxes_3d: engine.target.BoundingBox3DList
+ :return: ROS message with the boxes
+ :rtype: vision_msgs.msg.Detection3DArray
+ """
+ ros_boxes_3d = Detection3DArray()
+ for i in range(len(boxes_3d)):
+ box = Detection3D()
+ box.bbox = BoundingBox3DMsg()
+ box.results.append(ObjectHypothesisWithPose())
+ box.bbox.center = Pose3D()
+ box.bbox.center.position.x = float(boxes_3d[i].location[0])
+ box.bbox.center.position.y = float(boxes_3d[i].location[1])
+ box.bbox.center.position.z = float(boxes_3d[i].location[2])
+ box.bbox.center.orientation = QuaternionMsg(x=0.0, y=float(boxes_3d[i].rotation_y), z=0.0, w=0.0)
+ box.bbox.size.x = float(boxes_3d[i].dimensions[0])
+ box.bbox.size.y = float(boxes_3d[i].dimensions[1])
+ box.bbox.size.z = float(boxes_3d[i].dimensions[2])
+ box.results[0].id = boxes_3d[i].name
+ box.results[0].score = float(boxes_3d[i].confidence)
+ ros_boxes_3d.detections.append(box)
+ return ros_boxes_3d
+
+ def from_ros_mesh(self, mesh_ROS):
+ """
+ Converts a ROS mesh into arrays of vertices and faces of a mesh
+ :param mesh_ROS: the ROS mesh to be converted
+ :type mesh_ROS: shape_msgs.msg.Mesh
+ :return vertices: the vertices of the 3D model
+ :rtype vertices: numpy array (Nx3)
+ :return faces: the vertex indices of each face of the 3D model
+ :rtype faces: numpy array (Nx3)
+ """
+ vertices = np.zeros([len(mesh_ROS.vertices), 3])
+ faces = np.zeros([len(mesh_ROS.triangles), 3]).astype(int)
+ for i in range(len(mesh_ROS.vertices)):
+ vertices[i] = np.array([mesh_ROS.vertices[i].x, mesh_ROS.vertices[i].y, mesh_ROS.vertices[i].z])
+ for i in range(len(mesh_ROS.triangles)):
+ faces[i] = np.array([int(mesh_ROS.triangles[i].vertex_indices[0]), int(mesh_ROS.triangles[i].vertex_indices[1]),
+ int(mesh_ROS.triangles[i].vertex_indices[2])]).astype(int)
+ return vertices, faces
+
+ def to_ros_mesh(self, vertices, faces):
+ """
+ Converts a mesh into a ROS Mesh
+ :param vertices: the vertices of the 3D model
+ :type vertices: numpy array (Nx3)
+ :param faces: the faces of the 3D model
+ :type faces: numpy array (Nx3)
+ :return mesh_ROS: a ROS mesh
+ :rtype mesh_ROS: shape_msgs.msg.Mesh
+ """
+ mesh_ROS = Mesh()
+ for i in range(vertices.shape[0]):
+ point = Point()
+ point.x = vertices[i, 0]
+ point.y = vertices[i, 1]
+ point.z = vertices[i, 2]
+ mesh_ROS.vertices.append(point)
+ for i in range(faces.shape[0]):
+ mesh_triangle = MeshTriangle()
+ mesh_triangle.vertex_indices[0] = int(faces[i][0])
+ mesh_triangle.vertex_indices[1] = int(faces[i][1])
+ mesh_triangle.vertex_indices[2] = int(faces[i][2])
+ mesh_ROS.triangles.append(mesh_triangle)
+ return mesh_ROS
+
+ def from_ros_colors(self, ros_colors):
+ """
+ Converts a list of ROS colors into a list of colors
+ :param ros_colors: a list of the colors of the vertices
+ :type ros_colors: std_msgs.msg.ColorRGBA[]
+ :return colors: the colors of the vertices of the 3D model
+ :rtype colors: numpy array (Nx3)
+ """
+ colors = np.zeros([len(ros_colors), 3])
+ for i in range(len(ros_colors)):
+ colors[i] = np.array([ros_colors[i].r, ros_colors[i].g, ros_colors[i].b])
+ return colors
+
+ def to_ros_colors(self, colors):
+ """
+ Converts an array of vertex_colors to a list of ROS colors
+ :param colors: a numpy array of RGB colors
+ :type colors: numpy array (Nx3)
+ :return ros_colors: a list of the colors of the vertices
+ :rtype ros_colors: std_msgs.msg.ColorRGBA[]
+ """
+ ros_colors = []
+ for i in range(colors.shape[0]):
+ color = ColorRGBA()
+ color.r = colors[i, 0]
+ color.g = colors[i, 1]
+ color.b = colors[i, 2]
+ color.a = 0.0
+ ros_colors.append(color)
+ return ros_colors
+
+ def from_ros_pose_3D(self, ros_pose):
+ """
+ Converts a ROS message with pose payload into an OpenDR pose
+ :param ros_pose: the pose to be converted (represented as opendr_interface.msg.OpenDRPose3D)
+ :type ros_pose: opendr_interface.msg.OpenDRPose3D
+ :return: an OpenDR pose
+ :rtype: engine.target.Pose
+ """
+ keypoints = ros_pose.keypoint_list
+ data = []
+ for i, keypoint in enumerate(keypoints):
+ data.append([keypoint.x, keypoint.y, keypoint.z])
+ pose = Pose(data, 1.0)
+ pose.id = 0
+ return pose
+
+ def to_ros_pose_3D(self, pose):
+ """
+ Converts an OpenDR pose into an OpenDRPose3D msg that can carry the same information
+ Each keypoint is represented as an OpenDRPose3DKeypoint with x, y, z coordinates.
+ :param pose: OpenDR pose to be converted
+ :type pose: engine.target.Pose
+ :return: ROS message with the pose
+ :rtype: opendr_interface.msg.OpenDRPose3D
+ """
+ data = pose.data
+ ros_pose = OpenDRPose3D()
+ ros_pose.pose_id = 0
+ if pose.id is not None:
+ ros_pose.pose_id = int(pose.id)
+ ros_pose.conf = 1.0
+ for i in range(len(data)):
+ keypoint = OpenDRPose3DKeypoint()
+ keypoint.kpt_name = ''
+ keypoint.x = float(data[i][0])
+ keypoint.y = float(data[i][1])
+ keypoint.z = float(data[i][2])
+ ros_pose.keypoint_list.append(keypoint)
+ return ros_pose
+
+ def to_ros_category(self, category):
+ """
+ Converts an OpenDR category into an ObjectHypothesis msg that can carry the Category.data and Category.confidence.
+ :param category: OpenDR category to be converted
+ :type category: engine.target.Category
+ :return: ROS message with the category.data and category.confidence
+ :rtype: vision_msgs.msg.ObjectHypothesis
+ """
+ result = ObjectHypothesis()
+ result.id = str(category.data)
+ result.score = float(category.confidence)
+ return result
+
+ def from_ros_category(self, ros_hypothesis):
+ """
+ Converts a ROS message with category payload into an OpenDR category
+ :param ros_hypothesis: the object hypothesis to be converted
+ :type ros_hypothesis: vision_msgs.msg.ObjectHypothesis
+ :return: an OpenDR category
+ :rtype: engine.target.Category
+ """
+ category = Category(prediction=ros_hypothesis.id, description=None,
+ confidence=ros_hypothesis.score)
+ return category
+
+ def to_ros_category_description(self, category):
+ """
+ Converts an OpenDR category into a string msg that can carry the Category.description.
+ :param category: OpenDR category to be converted
+ :type category: engine.target.Category
+ :return: ROS message with the category.description
+ :rtype: std_msgs.msg.String
+ """
+ result = String()
+ result.data = category.description
+ return result
+
+ def from_rosarray_to_timeseries(self, ros_array, dim1, dim2):
+ """
+ Converts a ROS2 Float32MultiArray into an OpenDR Timeseries object
+ :param ros_array: data to be converted
+ :type ros_array: std_msgs.msg.Float32MultiArray
+ :param dim1: 1st dimension
+ :type dim1: int
+ :param dim2: 2nd dimension
+ :type dim2: int
+ :rtype: engine.data.Timeseries
+ """
+ data = np.reshape(ros_array.data, (dim1, dim2))
+ data = Timeseries(data)
+ return data
+
+ def from_ros_image_to_depth(self, message, encoding='mono16'):
+ """
+ Converts a ROS2 image message into an OpenDR grayscale depth image
+ :param message: ROS2 image to be converted
+ :type message: sensor_msgs.msg.Image
+ :param encoding: encoding to be used for the conversion
+ :type encoding: str
+ :return: OpenDR image
+ :rtype: engine.data.Image
+ """
+ cv_image = self._cv_bridge.imgmsg_to_cv2(message, desired_encoding=encoding)
+ cv_image = np.expand_dims(cv_image, axis=-1)
+ image = Image(np.asarray(cv_image, dtype=np.uint8))
+ return image
+
+ def from_category_to_rosclass(self, prediction, timestamp, source_data=None):
+ """
+ Converts OpenDR Category into Classification2D message with class label, confidence, timestamp and corresponding input
+ :param prediction: classification prediction
+ :type prediction: engine.target.Category
+ :param timestamp: time stamp for the header of the message
+ :type timestamp: ROS2 Time
+ :param source_data: corresponding input image or None
+ :return: classification message with the prediction
+ :rtype: vision_msgs.msg.Classification2D
+ """
+ classification = Classification2D()
+ classification.header = Header()
+ classification.header.stamp = timestamp
+
+ result = ObjectHypothesis()
+ result.id = str(prediction.data)
+ result.score = prediction.confidence
+ classification.results.append(result)
+ if source_data is not None:
+ classification.source_img = source_data
+ return classification
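To make the conversion API above concrete, the following minimal sketch round-trips an OpenDR `Pose` and a `BoundingBoxList` through the bridge. It is illustrative only: it assumes the class defined above is exported as `ROS2Bridge` by the `opendr_bridge` package (the class name is not visible in this excerpt) and that the OpenDR target types are importable from `opendr.engine.target`, as the docstrings indicate.

```python
# Minimal usage sketch (not part of the patch): round-trip OpenDR types through the bridge.
# Assumes the class above is exported as `ROS2Bridge` by the opendr_bridge package and that
# the OpenDR targets live in opendr.engine.target, as the docstrings indicate.
import numpy as np

from opendr_bridge import ROS2Bridge                      # assumed export name
from opendr.engine.target import Pose, BoundingBox, BoundingBoxList

bridge = ROS2Bridge()

# OpenDR Pose -> opendr_interface.msg.OpenDRPose2D -> OpenDR Pose
keypoints = np.zeros((18, 2), dtype=int)                  # 18 (x, y) keypoints in pixels
pose = Pose(keypoints, 0.9)
pose.id = 0
ros_pose = bridge.to_ros_pose(pose)
pose_back = bridge.from_ros_pose(ros_pose)

# OpenDR BoundingBoxList -> vision_msgs.msg.Detection2DArray -> OpenDR BoundingBoxList
boxes = BoundingBoxList(boxes=[BoundingBox(name=1, left=10.0, top=20.0, width=50.0, height=80.0)])
ros_boxes = bridge.to_ros_boxes(boxes)
boxes_back = bridge.from_ros_boxes(ros_boxes)
```

Because the generated ROS2 message classes are plain Python objects, no `rclpy.init()` call or node is required for conversions like these.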
diff --git a/projects/opendr_ws_2/src/opendr_bridge/package.xml b/projects/opendr_ws_2/src/opendr_bridge/package.xml
new file mode 100644
index 0000000000..290546ab5d
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/package.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_bridge</name>
+  <version>2.0.0</version>
+  <description>OpenDR ROS2 bridge package. This package provides a way to translate ROS2 messages into OpenDR data types
+  and vice versa.</description>
+  <maintainer email="tefas@csd.auth.gr">OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+
+  <depend>rclpy</depend>
+
+  <test_depend>ament_copyright</test_depend>
+  <test_depend>ament_flake8</test_depend>
+  <test_depend>ament_pep257</test_depend>
+  <test_depend>python3-pytest</test_depend>
+
+  <export>
+    <build_type>ament_python</build_type>
+  </export>
+</package>
diff --git a/projects/control/eagerx/demos/__init__.py b/projects/opendr_ws_2/src/opendr_bridge/resource/opendr_bridge
similarity index 100%
rename from projects/control/eagerx/demos/__init__.py
rename to projects/opendr_ws_2/src/opendr_bridge/resource/opendr_bridge
diff --git a/projects/opendr_ws_2/src/opendr_bridge/setup.cfg b/projects/opendr_ws_2/src/opendr_bridge/setup.cfg
new file mode 100644
index 0000000000..9d9e5c012f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/setup.cfg
@@ -0,0 +1,4 @@
+[develop]
+script_dir=$base/lib/opendr_bridge
+[install]
+install_scripts=$base/lib/opendr_bridge
diff --git a/projects/opendr_ws_2/src/opendr_bridge/setup.py b/projects/opendr_ws_2/src/opendr_bridge/setup.py
new file mode 100644
index 0000000000..df933edd8b
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/setup.py
@@ -0,0 +1,26 @@
+from setuptools import setup
+
+package_name = 'opendr_bridge'
+
+setup(
+ name=package_name,
+ version='2.0.0',
+ packages=[package_name],
+ data_files=[
+ ('share/ament_index/resource_index/packages',
+ ['resource/' + package_name]),
+ ('share/' + package_name, ['package.xml']),
+ ],
+ install_requires=['setuptools'],
+ zip_safe=True,
+ maintainer='OpenDR Project Coordinator',
+ maintainer_email='tefas@csd.auth.gr',
+ description='OpenDR ROS2 bridge package. This package provides a way to translate ROS2 messages into OpenDR ' +
+ 'data types and vice versa.',
+ license='Apache License v2.0',
+ tests_require=['pytest'],
+ entry_points={
+ 'console_scripts': [
+ ],
+ },
+)
diff --git a/projects/opendr_ws_2/src/opendr_bridge/test/test_copyright.py b/projects/opendr_ws_2/src/opendr_bridge/test/test_copyright.py
new file mode 100644
index 0000000000..cc8ff03f79
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/test/test_copyright.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_copyright.main import main
+import pytest
+
+
+@pytest.mark.copyright
+@pytest.mark.linter
+def test_copyright():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found errors'
diff --git a/projects/opendr_ws_2/src/opendr_bridge/test/test_flake8.py b/projects/opendr_ws_2/src/opendr_bridge/test/test_flake8.py
new file mode 100644
index 0000000000..27ee1078ff
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/test/test_flake8.py
@@ -0,0 +1,25 @@
+# Copyright 2017 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_flake8.main import main_with_errors
+import pytest
+
+
+@pytest.mark.flake8
+@pytest.mark.linter
+def test_flake8():
+ rc, errors = main_with_errors(argv=[])
+ assert rc == 0, \
+ 'Found %d code style errors / warnings:\n' % len(errors) + \
+ '\n'.join(errors)
diff --git a/projects/opendr_ws_2/src/opendr_bridge/test/test_pep257.py b/projects/opendr_ws_2/src/opendr_bridge/test/test_pep257.py
new file mode 100644
index 0000000000..b234a3840f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_bridge/test/test_pep257.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_pep257.main import main
+import pytest
+
+
+@pytest.mark.linter
+@pytest.mark.pep257
+def test_pep257():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found code style errors / warnings'
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/__init__.py b/projects/opendr_ws_2/src/opendr_data_generation/opendr_data_generation/__init__.py
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/__init__.py
rename to projects/opendr_ws_2/src/opendr_data_generation/opendr_data_generation/__init__.py
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/opendr_data_generation/synthetic_facial_generation_node.py b/projects/opendr_ws_2/src/opendr_data_generation/opendr_data_generation/synthetic_facial_generation_node.py
new file mode 100644
index 0000000000..094a20cfd8
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/opendr_data_generation/synthetic_facial_generation_node.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python3.6
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import os
+import argparse
+import numpy as np
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from cv_bridge import CvBridge
+
+from opendr.projects.python.simulation.synthetic_multi_view_facial_image_generation.algorithm.DDFA.utils.ddfa \
+ import str2bool
+from opendr.src.opendr.engine.data import Image
+from opendr.projects.python.simulation.synthetic_multi_view_facial_image_generation.SyntheticDataGeneration \
+ import MultiviewDataGeneration
+
+
+class SyntheticDataGeneratorNode(Node):
+
+ def __init__(self, args, input_rgb_image_topic="/image_raw",
+ output_rgb_image_topic="/opendr/synthetic_facial_images"):
+ """
+ Creates a ROS Node for SyntheticDataGeneration
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the synthetic facial image (if None, no image
+ is published)
+ :type output_rgb_image_topic: str
+ """
+ super().__init__('synthetic_facial_image_generation_node')
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 10)
+ self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+ self._cv_bridge = CvBridge()
+ self.ID = 0
+ self.args = args
+ self.path_in = args.path_in
+ self.key = str(args.path_3ddfa + "/example/Images/")
+ self.key1 = str(args.path_3ddfa + "/example/")
+ self.key2 = str(args.path_3ddfa + "/results/")
+ self.save_path = args.save_path
+ self.val_yaw = args.val_yaw
+ self.val_pitch = args.val_pitch
+ self.device = args.device
+
+ # Initialize the SyntheticDataGeneration
+ self.synthetic = MultiviewDataGeneration(self.args)
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+
+ cv_image = self._cv_bridge.imgmsg_to_cv2(data, desired_encoding="rgb8")
+ image = Image(np.asarray(cv_image, dtype=np.uint8))
+ self.ID = self.ID + 1
+ # Get an OpenCV image back
+ image = cv2.cvtColor(image.opencv(), cv2.COLOR_RGBA2BGR)
+ name = str(f"{self.ID:02d}" + "_single.jpg")
+ cv2.imwrite(os.path.join(self.path_in, name), image)
+
+ if self.ID == 10:
+ # Run SyntheticDataGeneration
+ self.synthetic.eval()
+ self.ID = 0
+ # Annotate image and publish results
+ current_directory_path = os.path.join(self.save_path, "Documents_orig")
+ for file in os.listdir(current_directory_path):
+ name, ext = os.path.splitext(file)
+ if ext == ".jpg":
+ image_file_savepath = os.path.join(current_directory_path, file)
+ cv_image = cv2.imread(image_file_savepath)
+ cv_image = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB)
+ if self.image_publisher is not None:
+ # Convert the generated image into a ROS2 image message and publish it
+ # (the CvBridge instance is used, since no OpenDR bridge object is created in this node)
+ message = self._cv_bridge.cv2_to_imgmsg(cv_image, encoding="rgb8")
+ self.image_publisher.publish(message)
+ for f in os.listdir(self.path_in):
+ os.remove(os.path.join(self.path_in, f))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=str, default="/opendr/synthetic_facial_images")
+ parser.add_argument("--path_in", default=os.path.join("opendr", "projects",
+ "data_generation",
+ "synthetic_multi_view_facial_image_generation",
+ "demos", "imgs_input"),
+ type=str, help='Give the path of image folder')
+ parser.add_argument('--path_3ddfa', default=os.path.join("opendr", "projects",
+ "data_generation",
+ "synthetic_multi_view_facial_image_generation",
+ "algorithm", "DDFA"),
+ type=str, help='Give the path of DDFA folder')
+ parser.add_argument('--save_path', default=os.path.join("opendr", "projects",
+ "data_generation",
+ "synthetic_multi_view_facial_image_generation",
+ "results"),
+ type=str, help='Give the path of results folder')
+ parser.add_argument('--val_yaw', default="10 20", nargs='+', type=str, help='yaw poses list between [-90,90]')
+ parser.add_argument('--val_pitch', default="30 40", nargs='+', type=str, help='pitch poses list between [-90,90]')
+ parser.add_argument("--device", default="cuda", type=str, help="choose between cuda or cpu ")
+ parser.add_argument('-f', '--files', nargs='+',
+ help='image files paths fed into network, single or multiple images')
+ parser.add_argument('--show_flg', default='false', type=str2bool, help='whether show the visualization result')
+ parser.add_argument('--dump_res', default='true', type=str2bool,
+ help='whether write out the visualization image')
+ parser.add_argument('--dump_vertex', default='false', type=str2bool,
+ help='whether write out the dense face vertices to mat')
+ parser.add_argument('--dump_ply', default='true', type=str2bool)
+ parser.add_argument('--dump_pts', default='true', type=str2bool)
+ parser.add_argument('--dump_roi_box', default='false', type=str2bool)
+ parser.add_argument('--dump_pose', default='true', type=str2bool)
+ parser.add_argument('--dump_depth', default='true', type=str2bool)
+ parser.add_argument('--dump_pncc', default='true', type=str2bool)
+ parser.add_argument('--dump_paf', default='true', type=str2bool)
+ parser.add_argument('--paf_size', default=3, type=int, help='PAF feature kernel size')
+ parser.add_argument('--dump_obj', default='true', type=str2bool)
+ parser.add_argument('--dlib_bbox', default='true', type=str2bool, help='whether use dlib to predict bbox')
+ parser.add_argument('--dlib_landmark', default='true', type=str2bool,
+ help='whether use dlib landmark to crop image')
+ parser.add_argument('-m', '--mode', default='gpu', type=str, help='gpu or cpu mode')
+ parser.add_argument('--bbox_init', default='two', type=str,
+ help='one|two: one-step bbox initialization or two-step')
+ parser.add_argument('--dump_2d_img', default='true', type=str2bool, help='whether to save 3d rendered image')
+ parser.add_argument('--dump_param', default='true', type=str2bool, help='whether to save param')
+ parser.add_argument('--dump_lmk', default='true', type=str2bool, help='whether to save landmarks')
+ parser.add_argument('--save_dir', default='./algorithm/DDFA/results', type=str, help='dir to save result')
+ parser.add_argument('--save_lmk_dir', default='./example', type=str, help='dir to save landmark result')
+ parser.add_argument('--img_list', default='./txt_name_batch.txt', type=str, help='test image list file')
+ parser.add_argument('--rank', default=0, type=int, help='used when parallel run')
+ parser.add_argument('--world_size', default=1, type=int, help='used when parallel run')
+ parser.add_argument('--resume_idx', default=0, type=int)
+ args = parser.parse_args()
+
+ synthetic_data_generation_node = SyntheticDataGeneratorNode(args=args,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic)
+
+ rclpy.spin(synthetic_data_generation_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ synthetic_data_generation_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/package.xml b/projects/opendr_ws_2/src/opendr_data_generation/package.xml
new file mode 100644
index 0000000000..e6f73e51d2
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/package.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_data_generation</name>
+  <version>2.0.0</version>
+  <description>OpenDR's ROS2 nodes for data generation package</description>
+  <maintainer email="tefas@csd.auth.gr">tefas</maintainer>
+  <license>Apache License v2.0</license>
+
+  <depend>sensor_msgs</depend>
+
+  <exec_depend>rclpy</exec_depend>
+  <exec_depend>opendr_bridge</exec_depend>
+
+  <exec_depend>ament_cmake</exec_depend>
+
+  <test_depend>ament_copyright</test_depend>
+  <test_depend>ament_flake8</test_depend>
+  <test_depend>ament_pep257</test_depend>
+  <test_depend>python3-pytest</test_depend>
+
+  <export>
+    <build_type>ament_python</build_type>
+  </export>
+</package>
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/utils/__init__.py b/projects/opendr_ws_2/src/opendr_data_generation/resource/opendr_data_generation
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/utils/__init__.py
rename to projects/opendr_ws_2/src/opendr_data_generation/resource/opendr_data_generation
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/setup.cfg b/projects/opendr_ws_2/src/opendr_data_generation/setup.cfg
new file mode 100644
index 0000000000..893b4dda07
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/setup.cfg
@@ -0,0 +1,4 @@
+[develop]
+script_dir=$base/lib/opendr_data_generation
+[install]
+install_scripts=$base/lib/opendr_data_generation
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/setup.py b/projects/opendr_ws_2/src/opendr_data_generation/setup.py
new file mode 100644
index 0000000000..0735f378c8
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/setup.py
@@ -0,0 +1,40 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from setuptools import setup
+
+package_name = 'opendr_data_generation'
+
+setup(
+ name=package_name,
+ version='2.0.0',
+ packages=[package_name],
+ data_files=[
+ ('share/ament_index/resource_index/packages',
+ ['resource/' + package_name]),
+ ('share/' + package_name, ['package.xml']),
+ ],
+ install_requires=['setuptools'],
+ zip_safe=True,
+ maintainer='OpenDR Project Coordinator',
+ maintainer_email='tefas@csd.auth.gr',
+ description='OpenDR\'s ROS2 nodes for data generation package',
+ license='Apache License v2.0',
+ tests_require=['pytest'],
+ entry_points={
+ 'console_scripts': [
+ 'synthetic_facial_generation = opendr_data_generation.synthetic_facial_generation_node:main'
+ ],
+ },
+)
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/test/test_copyright.py b/projects/opendr_ws_2/src/opendr_data_generation/test/test_copyright.py
new file mode 100644
index 0000000000..cc8ff03f79
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/test/test_copyright.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_copyright.main import main
+import pytest
+
+
+@pytest.mark.copyright
+@pytest.mark.linter
+def test_copyright():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found errors'
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/test/test_flake8.py b/projects/opendr_ws_2/src/opendr_data_generation/test/test_flake8.py
new file mode 100644
index 0000000000..18bd9331ea
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/test/test_flake8.py
@@ -0,0 +1,25 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_flake8.main import main_with_errors
+import pytest
+
+
+@pytest.mark.flake8
+@pytest.mark.linter
+def test_flake8():
+ rc, errors = main_with_errors(argv=[])
+ assert rc == 0, \
+ 'Found %d code style errors / warnings:\n' % len(errors) + \
+ '\n'.join(errors)
diff --git a/projects/opendr_ws_2/src/opendr_data_generation/test/test_pep257.py b/projects/opendr_ws_2/src/opendr_data_generation/test/test_pep257.py
new file mode 100644
index 0000000000..b234a3840f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_data_generation/test/test_pep257.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_pep257.main import main
+import pytest
+
+
+@pytest.mark.linter
+@pytest.mark.pep257
+def test_pep257():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found code style errors / warnings'
diff --git a/projects/opendr_ws_2/src/opendr_interface/CMakeLists.txt b/projects/opendr_ws_2/src/opendr_interface/CMakeLists.txt
new file mode 100644
index 0000000000..9c158812e5
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/CMakeLists.txt
@@ -0,0 +1,50 @@
+cmake_minimum_required(VERSION 3.5)
+project(opendr_interface)
+
+# Default to C99
+if(NOT CMAKE_C_STANDARD)
+ set(CMAKE_C_STANDARD 99)
+endif()
+
+# Default to C++14
+if(NOT CMAKE_CXX_STANDARD)
+ set(CMAKE_CXX_STANDARD 14)
+endif()
+
+if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
+ add_compile_options(-Wall -Wextra -Wpedantic)
+endif()
+
+# find dependencies
+find_package(ament_cmake REQUIRED)
+# uncomment the following section in order to fill in
+# further dependencies manually.
+# find_package( REQUIRED)
+find_package(std_msgs REQUIRED)
+find_package(shape_msgs REQUIRED)
+find_package(sensor_msgs REQUIRED)
+find_package(vision_msgs REQUIRED)
+find_package(rosidl_default_generators REQUIRED)
+
+rosidl_generate_interfaces(${PROJECT_NAME}
+ "msg/OpenDRPose2D.msg"
+ "msg/OpenDRPose2DKeypoint.msg"
+ "msg/OpenDRPose3D.msg"
+ "msg/OpenDRPose3DKeypoint.msg"
+ "srv/OpenDRSingleObjectTracking.srv"
+ "srv/ImgToMesh.srv"
+ DEPENDENCIES std_msgs shape_msgs sensor_msgs vision_msgs
+)
+
+if(BUILD_TESTING)
+ find_package(ament_lint_auto REQUIRED)
+ # the following line skips the linter which checks for copyrights
+ # uncomment the line when a copyright and license is not present in all source files
+ #set(ament_cmake_copyright_FOUND TRUE)
+ # the following line skips cpplint (only works in a git repo)
+ # uncomment the line when this package is not in a git repo
+ #set(ament_cmake_cpplint_FOUND TRUE)
+ ament_lint_auto_find_test_dependencies()
+endif()
+
+ament_package()
diff --git a/projects/perception/lightweight_open_pose/jetbot/results/.keep b/projects/opendr_ws_2/src/opendr_interface/include/opendr_interface/.keep
similarity index 100%
rename from projects/perception/lightweight_open_pose/jetbot/results/.keep
rename to projects/opendr_ws_2/src/opendr_interface/include/opendr_interface/.keep
diff --git a/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2D.msg b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2D.msg
new file mode 100644
index 0000000000..184f3fd11b
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2D.msg
@@ -0,0 +1,26 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message represents a full OpenDR human pose 2D as a list of keypoints
+
+std_msgs/Header header
+
+# The id of the pose
+int32 pose_id
+
+# The pose detection confidence of the model
+float32 conf
+
+# A list of a human 2D pose keypoints
+OpenDRPose2DKeypoint[] keypoint_list
\ No newline at end of file
diff --git a/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2DKeypoint.msg b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2DKeypoint.msg
new file mode 100644
index 0000000000..72d14a19f2
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose2DKeypoint.msg
@@ -0,0 +1,22 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message contains all relevant information for an OpenDR human pose 2D keypoint
+
+# The kpt_name according to https://github.com/opendr-eu/opendr/blob/master/docs/reference/lightweight-open-pose.md#notes
+string kpt_name
+
+# x and y pixel position on the input image, (0, 0) is top-left corner of image
+int32 x
+int32 y
diff --git a/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3D.msg b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3D.msg
new file mode 100644
index 0000000000..a180eed5b0
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3D.msg
@@ -0,0 +1,26 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message represents a full OpenDR human pose 3D as a list of keypoints
+
+std_msgs/Header header
+
+# The id of the pose
+int32 pose_id
+
+# The pose detection confidence of the model
+float32 conf
+
+# A list of a human 3D pose keypoints
+OpenDRPose3DKeypoint[] keypoint_list
diff --git a/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3DKeypoint.msg b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3DKeypoint.msg
new file mode 100644
index 0000000000..179aa9e348
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/msg/OpenDRPose3DKeypoint.msg
@@ -0,0 +1,22 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This message contains all relevant information for an OpenDR human pose 3D keypoint
+
+# The kpt_name according to https://github.com/opendr-eu/opendr/blob/master/docs/reference/lightweight-open-pose.md#notes
+string kpt_name
+
+float32 x
+float32 y
+float32 z
diff --git a/projects/opendr_ws_2/src/opendr_interface/package.xml b/projects/opendr_ws_2/src/opendr_interface/package.xml
new file mode 100644
index 0000000000..fdbd9c351e
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/package.xml
@@ -0,0 +1,24 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_interface</name>
+  <version>2.0.0</version>
+  <description>OpenDR ROS2 custom interface package. This package includes all custom OpenDR ROS2 messages and services.</description>
+  <maintainer email="tefas@csd.auth.gr">OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+
+  <buildtool_depend>ament_cmake</buildtool_depend>
+
+  <depend>std_msgs</depend>
+  <build_depend>rosidl_default_generators</build_depend>
+
+  <exec_depend>rosidl_default_runtime</exec_depend>
+  <member_of_group>rosidl_interface_packages</member_of_group>
+
+  <test_depend>ament_lint_auto</test_depend>
+  <test_depend>ament_lint_common</test_depend>
+
+  <export>
+    <build_type>ament_cmake</build_type>
+  </export>
+</package>
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/utils/cython/__init__.py b/projects/opendr_ws_2/src/opendr_interface/src/.keep
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/DDFA/utils/cython/__init__.py
rename to projects/opendr_ws_2/src/opendr_interface/src/.keep
diff --git a/projects/opendr_ws_2/src/opendr_interface/srv/ImgToMesh.srv b/projects/opendr_ws_2/src/opendr_interface/srv/ImgToMesh.srv
new file mode 100644
index 0000000000..3d6d15717a
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/srv/ImgToMesh.srv
@@ -0,0 +1,21 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+sensor_msgs/Image img_rgb
+sensor_msgs/Image img_msk
+std_msgs/Bool extract_pose
+---
+shape_msgs/Mesh mesh
+std_msgs/ColorRGBA[] vertex_colors
+OpenDRPose3D pose
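For a sense of how this service is meant to be consumed, below is a minimal `rclpy` client sketch. The service name `/opendr/img_to_mesh` is only a placeholder (the node that implements the service defines the actual name), and the two input images are assumed to already be `sensor_msgs/Image` messages.

```python
# Minimal, illustrative rclpy client for the ImgToMesh service defined above.
# The service name "/opendr/img_to_mesh" is a placeholder, not taken from the toolkit.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool
from opendr_interface.srv import ImgToMesh


class ImgToMeshClient(Node):
    def __init__(self):
        super().__init__('img_to_mesh_client')
        # Placeholder service name; the node implementing the service defines the real one
        self.client = self.create_client(ImgToMesh, '/opendr/img_to_mesh')
        while not self.client.wait_for_service(timeout_sec=1.0):
            self.get_logger().info('waiting for the mesh service...')

    def request_mesh(self, img_rgb, img_msk, extract_pose=True):
        """img_rgb and img_msk are sensor_msgs.msg.Image messages."""
        request = ImgToMesh.Request()
        request.img_rgb = img_rgb
        request.img_msk = img_msk
        request.extract_pose = Bool(data=extract_pose)
        future = self.client.call_async(request)
        rclpy.spin_until_future_complete(self, future)
        response = future.result()
        # response.mesh: shape_msgs/Mesh, response.vertex_colors: ColorRGBA[],
        # response.pose: opendr_interface/OpenDRPose3D
        return response.mesh, response.vertex_colors, response.pose
```

In a running system, `rclpy.init()` would be called before constructing the client node.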
diff --git a/projects/opendr_ws_2/src/opendr_interface/srv/OpenDRSingleObjectTracking.srv b/projects/opendr_ws_2/src/opendr_interface/srv/OpenDRSingleObjectTracking.srv
new file mode 100644
index 0000000000..e7b3c29517
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_interface/srv/OpenDRSingleObjectTracking.srv
@@ -0,0 +1,17 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+vision_msgs/Detection2D init_box
+---
+bool success
diff --git a/projects/opendr_ws_2/src/opendr_perception/README.md b/projects/opendr_ws_2/src/opendr_perception/README.md
new file mode 100755
index 0000000000..1fce5f935d
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/README.md
@@ -0,0 +1,885 @@
+# OpenDR Perception Package
+
+This package contains ROS2 nodes related to the perception package of OpenDR.
+
+---
+
+## Prerequisites
+
+Before you can run any of the toolkit's ROS2 nodes, some prerequisites need to be fulfilled:
+1. First of all, you need to [set up the required packages and build your workspace.](../../README.md#first-time-setup)
+2. _(Optional for nodes with [RGB input](#rgb-input-nodes))_
+
+ For basic usage and testing, all the toolkit's ROS2 nodes that use RGB images are set up to expect input from a basic webcam using the default package `usb_cam` which is installed with OpenDR. You can run the webcam node in a new terminal:
+ ```shell
+ ros2 run usb_cam usb_cam_node_exe
+ ```
+ By default, the USB cam node publishes images on `/image_raw` and the RGB input nodes subscribe to this topic if not provided with an input topic argument.
+ As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing images, **make sure to change the input topic accordingly.**
+
+3. _(Optional for nodes with [audio input](#audio-input) or [audiovisual input](#rgb--audio-input))_
+
+ For basic usage and testing, the toolkit's ROS2 nodes that use audio as input are set up to expect input from a basic audio device using the default package `audio_common` which is installed with OpenDR. You can run the audio node in a new terminal:
+ ```shell
+ ros2 run audio_capture audio_capture_node
+ ```
+ By default, the audio capture node publishes audio data on `/audio` and the audio input nodes subscribe to this topic if not provided with an input topic argument.
+ As explained for each node below, you can modify the topics via arguments, so if you use any other node responsible for publishing audio, **make sure to change the input topic accordingly.**
+
+---
+
+## Notes
+
+- ### Display output images with rqt_image_view
+ For any node that outputs images, `rqt_image_view` can be used to display them by running the following command:
+ ```shell
+ ros2 run rqt_image_view rqt_image_view &
+ ```
+ A window will appear, where the topic that you want to view can be selected from the drop-down menu on the top-left area of the window.
+ Refer to each node's documentation below to find out the default output image topic, where applicable, and select it on the drop-down menu of rqt_image_view.
+
+- ### Echo node output
+ All OpenDR nodes publish some kind of detection message, which can be echoed by running the following command:
+ ```shell
+ ros2 topic echo /opendr/topic_name
+ ```
+ You can find out the default topic name for each node, in its documentation below.
+
+- ### Increase performance by disabling output
+ Optionally, nodes can be modified via command line arguments, which are presented for each node separately below.
+ Generally, arguments give the option to change the input and output topics, the device the node runs on (CPU or GPU), etc.
+ When a node publishes on several topics, where applicable, a user can opt to disable one or more of the outputs by providing `None` in the corresponding output topic.
+ This disables publishing on that topic, forgoing some operations in the node, which might increase its performance.
+
+ _An example would be to disable the output annotated image topic in a node when visualization is not needed and only use the detection message in another node, thus eliminating the OpenCV operations._
+
+- ### An example diagram of OpenDR nodes running
+ ![Face Detection ROS2 node running diagram](../../images/opendr_node_diagram.png)
+ - On the left, the `usb_cam` node can be seen, which is using a system camera to publish images on the `/image_raw` topic.
+ - In the middle, OpenDR's face detection node is running, taking the published image as input. By default, the node has its input topic set to `/image_raw`.
+ - To the right the two output topics of the face detection node can be seen.
+ The bottom topic `/opendr/image_faces_annotated` is the annotated image which can be easily viewed with `rqt_image_view` as explained earlier.
+ The other topic `/opendr/faces` is the detection message which contains the detected faces' detailed information.
+ This message can be easily viewed by running `ros2 topic echo /opendr/faces` in a terminal.
+
+
+
+----
+
+## RGB input nodes
+
+### Pose Estimation ROS2 Node
+
+You can find the pose estimation ROS2 node python script [here](./opendr_perception/pose_estimation_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [pose estimation tool](../../../../src/opendr/perception/pose_estimation/lightweight_open_pose/lightweight_open_pose_learner.py) whose documentation can be found [here](../../../../docs/reference/lightweight-open-pose.md).
+The node publishes the detected poses in [OpenDR's 2D pose message format](../opendr_interface/msg/OpenDRPose2D.msg), which carries a list of keypoints in [OpenDR's keypoint message format](../opendr_interface/msg/OpenDRPose2DKeypoint.msg).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the pose detection node:
+ ```shell
+ ros2 run opendr_perception pose_estimation
+ ```
+ The following optional arguments are available:
+ - `-h, --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy
+
+3. Default output topics:
+ - Output images: `/opendr/image_pose_annotated`
+ - Detection messages: `/opendr/poses`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
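+If you want to consume the published pose messages from your own ROS2 node, the minimal subscriber sketch below can serve as a starting point. It assumes the `opendr_interface` package has been built in the same workspace, so that the message is importable as `opendr_interface.msg.OpenDRPose2D`; the node name and logging are illustrative only.
+
+```python
+import rclpy
+from rclpy.node import Node
+from opendr_interface.msg import OpenDRPose2D
+
+
+class PoseListener(Node):
+    def __init__(self):
+        super().__init__('pose_listener')
+        # Each OpenDRPose2D message describes a single detected pose
+        self.subscription = self.create_subscription(OpenDRPose2D, '/opendr/poses', self.callback, 10)
+
+    def callback(self, msg):
+        self.get_logger().info(f'Pose {msg.pose_id} (conf {msg.conf:.2f})')
+        for kpt in msg.keypoint_list:
+            self.get_logger().info(f'  {kpt.kpt_name}: ({kpt.x}, {kpt.y})')
+
+
+def main():
+    rclpy.init()
+    rclpy.spin(PoseListener())
+
+
+if __name__ == '__main__':
+    main()
+```
+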
+### High Resolution Pose Estimation ROS2 Node
+
+You can find the high resolution pose estimation ROS2 node python script [here](./opendr_perception/hr_pose_estimation_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [high resolution pose estimation tool](../../../../src/opendr/perception/pose_estimation/hr_pose_estimation/high_resolution_learner.py) whose documentation can be found [here](../../../../docs/reference/high-resolution-pose-estimation.md).
+The node publishes the detected poses in [OpenDR's 2D pose message format](../opendr_interface/msg/OpenDRPose2D.msg), which carries a list of keypoints in [OpenDR's keypoint message format](../opendr_interface/msg/OpenDRPose2DKeypoint.msg).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the high resolution pose detection node:
+ ```shell
+ ros2 run opendr_perception hr_pose_estimation
+ ```
+ The following optional arguments are available:
+ - `-h, --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--accelerate`: Acceleration flag that causes pose estimation to run faster but with less accuracy
+
+3. Default output topics:
+ - Output images: `/opendr/image_pose_annotated`
+ - Detection messages: `/opendr/poses`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Fall Detection ROS2 Node
+
+You can find the fall detection ROS2 node python script [here](./opendr_perception/fall_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md).
+Fall detection uses the toolkit's pose estimation tool internally.
+
+
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the fall detection node:
+
+ ```shell
+ ros2 run opendr_perception fall_detection
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--accelerate`: acceleration flag that causes the pose estimation running internally to run faster but with less accuracy
+
+3. Default output topics:
+ - Output images: `/opendr/image_fallen_annotated`
+ - Detection messages: `/opendr/fallen`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Face Detection ROS2 Node
+
+The face detection ROS2 node supports both the ResNet and MobileNet versions, the latter of which performs masked face detection as well.
+
+You can find the face detection ROS2 node python script [here](./opendr_perception/face_detection_retinaface_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [face detection tool](../../../../src/opendr/perception/object_detection_2d/retinaface/retinaface_learner.py) whose documentation can be found [here](../../../../docs/reference/face-detection-2d-retinaface.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the face detection node:
+
+ ```shell
+ ros2 run opendr_perception face_detection_retinaface
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_faces_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/faces`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--backbone BACKBONE`: retinaface backbone, options are either `mnet` or `resnet`, where `mnet` detects masked faces as well (default=`resnet`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_faces_annotated`
+ - Detection messages: `/opendr/faces`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Face Recognition ROS2 Node
+
+You can find the face recognition ROS2 node python script [here](./opendr_perception/face_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [face recognition tool](../../../../src/opendr/perception/face_recognition/face_recognition_learner.py) whose documentation can be found [here](../../../../docs/reference/face-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the face recognition node:
+
+ ```shell
+ ros2 run opendr_perception face_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_face_reco_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition`)
+ - `-id or --detections_id_topic DETECTIONS_ID_TOPIC`: topic name for detection ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/face_recognition_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--backbone BACKBONE`: backbone network (default=`mobilefacenet`)
+ - `--dataset_path DATASET_PATH`: path of the directory where the images of the faces to be recognized are stored (default=`./database`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_face_reco_annotated`
+ - Detection messages: `/opendr/face_recognition` and `/opendr/face_recognition_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+Reference images should be placed in a defined structure like:
+- imgs
+ - ID1
+ - image1
+ - image2
+ - ID2
+ - ID3
+ - ...
+
+The default dataset path is `./database`. Please use the `--dataset_path ./your/path/` argument to define a custom one.
+The name of the sub-folder, e.g. ID1, will be published under `/opendr/face_recognition_id`.
+
+The database entry and the returned confidence are published under the topic name `/opendr/face_recognition`, and the human-readable ID
+under `/opendr/face_recognition_id`.
+
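+As a concrete sketch of the above, assuming your reference images live in a hypothetical `./my_faces` folder with one sub-folder per person, the node could be started and its output inspected as follows:
+
+```shell
+# Hypothetical example: point the node at a custom reference-image database
+ros2 run opendr_perception face_recognition --dataset_path ./my_faces
+# In another terminal, print the recognized person IDs (the sub-folder names)
+ros2 topic echo /opendr/face_recognition_id
+```
+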
+### 2D Object Detection ROS2 Nodes
+
+For 2D object detection, there are several ROS2 nodes implemented using various algorithms. The generic object detectors are SSD, YOLOv3, YOLOv5, CenterNet, Nanodet and DETR.
+
+You can find the 2D object detection ROS2 node python scripts here:
+[SSD node](./opendr_perception/object_detection_2d_ssd_node.py), [YOLOv3 node](./opendr_perception/object_detection_2d_yolov3_node.py), [YOLOv5 node](./opendr_perception/object_detection_2d_yolov5_node.py), [CenterNet node](./opendr_perception/object_detection_2d_centernet_node.py), [Nanodet node](./opendr_perception/object_detection_2d_nanodet_node.py) and [DETR node](./opendr_perception/object_detection_2d_detr_node.py),
+where you can inspect the code and modify it as you wish to fit your needs.
+The nodes make use of the toolkit's various 2D object detection tools:
+[SSD tool](../../../../src/opendr/perception/object_detection_2d/ssd/ssd_learner.py), [YOLOv3 tool](../../../../src/opendr/perception/object_detection_2d/yolov3/yolov3_learner.py), [YOLOv5 tool](../../../../src/opendr/perception/object_detection_2d/yolov5/yolov5_learner.py),
+[CenterNet tool](../../../../src/opendr/perception/object_detection_2d/centernet/centernet_learner.py), [Nanodet tool](../../../../src/opendr/perception/object_detection_2d/nanodet/nanodet_learner.py), [DETR tool](../../../../src/opendr/perception/object_detection_2d/detr/detr_learner.py),
+whose documentation can be found here:
+[SSD docs](../../../../docs/reference/object-detection-2d-ssd.md), [YOLOv3 docs](../../../../docs/reference/object-detection-2d-yolov3.md), [YOLOv5 docs](../../../../docs/reference/object-detection-2d-yolov5.md),
+[CenterNet docs](../../../../docs/reference/object-detection-2d-centernet.md), [Nanodet docs](../../../../docs/reference/nanodet.md), [DETR docs](../../../../docs/reference/detr.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start a 2D object detector node:
+ 1. SSD node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_ssd
+ ```
+ The following optional arguments are available for the SSD node:
+ - `--backbone BACKBONE`: Backbone network (default=`vgg16_atrous`)
+ - `--nms_type NMS_TYPE`: Non-Maximum Suppression type options are `default`, `seq2seq-nms`, `soft-nms`, `fast-nms`, `cluster-nms` (default=`default`)
+
+ 2. YOLOv3 node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_yolov3
+ ```
+ The following optional argument is available for the YOLOv3 node:
+ - `--backbone BACKBONE`: Backbone network (default=`darknet53`)
+
+ 3. YOLOv5 node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_yolov5
+ ```
+ The following optional argument is available for the YOLOv5 node:
+ - `--model_name MODEL_NAME`: Network architecture, options are `yolov5s`, `yolov5n`, `yolov5m`, `yolov5l`, `yolov5x`, `yolov5n6`, `yolov5s6`, `yolov5m6`, `yolov5l6`, `custom` (default=`yolov5s`)
+
+ 4. CenterNet node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_centernet
+ ```
+ The following optional argument is available for the CenterNet node:
+ - `--backbone BACKBONE`: Backbone network (default=`resnet50_v1b`)
+
+ 5. Nanodet node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_nanodet
+ ```
+ The following optional argument is available for the Nanodet node:
+     - `--model Model`: model whose config file will be used (default=`plus_m_1.5x_416`)
+
+ 6. DETR node
+ ```shell
+ ros2 run opendr_perception object_detection_2d_detr
+ ```
+
+ The following optional arguments are available for all nodes above:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
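+For instance, a possible invocation of the YOLOv5 node with a larger model and a remapped input topic is sketched below; the camera topic name is an assumption for illustration:
+
+```shell
+# Hypothetical example: run YOLOv5 with the yolov5m model on a custom camera topic
+ros2 run opendr_perception object_detection_2d_yolov5 --model_name yolov5m -i /usb_cam/image_raw
+```
+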
+### 2D Single Object Tracking ROS2 Node
+
+You can find the single object tracking 2D ROS2 node python script [here](./opendr_perception/object_tracking_2d_siamrpn_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [single object tracking 2D SiamRPN tool](../../../../src/opendr/perception/object_tracking_2d/siamrpn/siamrpn_learner.py) whose documentation can be found [here](../../../../docs/reference/object-tracking-2d-siamrpn.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the single object tracking 2D node:
+
+ ```shell
+ ros2 run opendr_perception object_tracking_2d_siamrpn
+ ```
+
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+   - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: listen to RGB images on this topic (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_tracking_annotated`)
+ - `-t or --tracker_topic TRACKER_TOPIC`: topic name for tracker messages, `None` to stop the node from publishing on this topic (default=`/opendr/tracked_object`)
+ - `--device DEVICE`: Device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_tracking_annotated`
+ - Detection messages: `/opendr/tracked_object`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+To initialize this node it is required to provide a bounding box of an object to track.
+This is achieved by initializing one of the toolkit's 2D object detectors (YOLOv3) and running object detection once on the input.
+Afterwards, **the detected bounding box that is closest to the center of the image** is used to initialize the tracker.
+Feel free to modify the node to initialize it in a different way that matches your use case.
+
+### 2D Object Tracking ROS2 Nodes
+
+For 2D object tracking, two ROS2 nodes are provided, one using Deep Sort and one using FairMOT, both of which can use either pretrained or custom trained models.
+The predicted tracking annotations are split into two topics with detections and tracking IDs. Additionally, an annotated image is generated.
+
+You can find the 2D object tracking ROS2 node python scripts here: [Deep Sort node](./opendr_perception/object_tracking_2d_deep_sort_node.py) and [FairMOT node](./opendr_perception/object_tracking_2d_fair_mot_node.py),
+where you can inspect the code and modify it as you wish to fit your needs.
+The nodes make use of the toolkit's [object tracking 2D - Deep Sort tool](../../../../src/opendr/perception/object_tracking_2d/deep_sort/object_tracking_2d_deep_sort_learner.py)
+and [object tracking 2D - FairMOT tool](../../../../src/opendr/perception/object_tracking_2d/fair_mot/object_tracking_2d_fair_mot_learner.py)
+whose documentation can be found here: [Deep Sort docs](../../../../docs/reference/object-tracking-2d-deep-sort.md), [FairMOT docs](../../../../docs/reference/object-tracking-2d-fair-mot.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start a 2D object tracking node:
+ 1. Deep Sort node
+ ```shell
+ ros2 run opendr_perception object_tracking_2d_deep_sort
+ ```
+ The following optional argument is available for the Deep Sort node:
+ - `-n --model_name MODEL_NAME`: name of the trained model (default=`deep_sort`)
+ 2. FairMOT node
+ ```shell
+ ros2 run opendr_perception object_tracking_2d_fair_mot
+ ```
+ The following optional argument is available for the FairMOT node:
+ - `-n --model_name MODEL_NAME`: name of the trained model (default=`fairmot_dla34`)
+
+ The following optional arguments are available for both nodes:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `-t or --tracking_id_topic TRACKING_ID_TOPIC`: topic name for tracking ID messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+   - `-td or --temp_dir TEMP_DIR`: path to a temporary directory with models (default=`temp`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+ - Tracking ID messages: `/opendr/objects_tracking_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+An [image dataset node](#image-dataset-ros2-node) is also provided to be used along with these nodes.
+Make sure to change the default input topic of the tracking node if you are not using the USB cam node.
+
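+As a sketch of such a sensor-free setup, the image dataset node (documented below) publishes frames on `/opendr/dataset_image` by default, which the FairMOT node can consume directly:
+
+```shell
+# Terminal 1: publish images from the nano_MOT20 dataset on /opendr/dataset_image
+ros2 run opendr_perception image_dataset
+# Terminal 2: run FairMOT on the dataset topic instead of a live camera
+ros2 run opendr_perception object_tracking_2d_fair_mot -i /opendr/dataset_image
+```
+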
+### Panoptic Segmentation ROS2 Node
+
+You can find the panoptic segmentation ROS2 node python script [here](./opendr_perception/panoptic_segmentation_efficient_ps_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [panoptic segmentation tool](../../../../src/opendr/perception/panoptic_segmentation/efficient_ps/efficient_ps_learner.py) whose documentation can be found [here](../../../../docs/reference/efficient-ps.md)
+and additional information about Efficient PS [here](../../../../src/opendr/perception/panoptic_segmentation/README.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the panoptic segmentation node:
+
+ ```shell
+ ros2 run opendr_perception panoptic_segmentation_efficient_ps
+ ```
+
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+   - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: listen to RGB images on this topic (default=`/image_raw`)
+   - `-oh or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: publish the semantic and instance maps on this topic as `OUTPUT_HEATMAP_TOPIC/semantic` and `OUTPUT_HEATMAP_TOPIC/instance`, `None` to stop the node from publishing on this topic (default=`/opendr/panoptic`)
+   - `-ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: publish the panoptic segmentation map as an RGB image on this topic or a more detailed overview if using the `--detailed_visualization` flag, `None` to stop the node from publishing on this topic (default=`opendr/panoptic/rgb_visualization`)
+   - `--detailed_visualization`: generate a combined overview of the input RGB image and the semantic, instance, and panoptic segmentation maps and publish it on `OUTPUT_RGB_IMAGE_TOPIC` (default=deactivated)
+   - `--checkpoint CHECKPOINT`: download pretrained models [cityscapes, kitti] or load from the provided path (default=`cityscapes`)
+
+3. Default output topics:
+ - Output images: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`, `/opendr/panoptic/rgb_visualization`
+ - Detection messages: `/opendr/panoptic/semantic`, `/opendr/panoptic/instance`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
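+As an illustration of the arguments above, the node could be started with the pretrained KITTI checkpoint and the combined visualization enabled:
+
+```shell
+# Example: use the KITTI checkpoint and publish a detailed overview image
+ros2 run opendr_perception panoptic_segmentation_efficient_ps --checkpoint kitti --detailed_visualization
+```
+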
+### Semantic Segmentation ROS2 Node
+
+You can find the semantic segmentation ROS2 node python script [here](./opendr_perception/semantic_segmentation_bisenet_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [semantic segmentation tool](../../../../src/opendr/perception/semantic_segmentation/bisenet/bisenet_learner.py) whose documentation can be found [here](../../../../docs/reference/semantic-segmentation.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the semantic segmentation node:
+
+ ```shell
+ ros2 run opendr_perception semantic_segmentation_bisenet
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_heatmap_topic OUTPUT_HEATMAP_TOPIC`: topic to which we are publishing the heatmap in the form of a ROS2 image containing class IDs, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap`)
+ - `-ov or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic to which we are publishing the heatmap image blended with the input image and a class legend for visualization purposes, `None` to stop the node from publishing on this topic (default=`/opendr/heatmap_visualization`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/heatmap`, `/opendr/heatmap_visualization`
+ - Detection messages: `/opendr/heatmap`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+In the table below you can find the detectable classes and their corresponding IDs:
+
+| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown |
+|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------|
+| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
+
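+As a small usage sketch, the node can be asked to publish only the class-ID heatmap (whose values follow the table above) by disabling the blended visualization output:
+
+```shell
+# Example: keep only the class-ID heatmap and run on the CPU
+ros2 run opendr_perception semantic_segmentation_bisenet -ov None --device cpu
+```
+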
+### Image-based Facial Emotion Estimation ROS2 Node
+
+You can find the image-based facial emotion estimation ROS2 node python script [here](./opendr_perception/facial_emotion_estimation_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's image-based facial emotion estimation tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/image_based_facial_emotion_estimation/facial_emotion_learner.py)
+whose documentation can be found [here](../../../../docs/reference/image_based_facial_emotion_estimation.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the image-based facial emotion estimation node:
+
+ ```shell
+ ros2 run opendr_perception facial_emotion_estimation
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_emotion_estimation_annotated`)
+   - `-e or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the facial emotion results, `None` to stop the node from publishing on this topic (default=`/opendr/facial_emotion_estimation`)
+ - `-m or --output_emotions_description_topic OUTPUT_EMOTIONS_DESCRIPTION_TOPIC`: topic to which we are publishing the description of the estimated facial emotion, `None` to stop the node from publishing on this topic (default=`/opendr/facial_emotion_estimation_description`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Output images: `/opendr/image_emotion_estimation_annotated`
+ - Detection messages: `/opendr/facial_emotion_estimation`, `/opendr/facial_emotion_estimation_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+This node requires the detection of a face first. This is achieved by using the toolkit's face detector and running face detection on the input.
+Afterwards, the detected bounding box of the face is cropped and fed into the facial emotion estimator.
+Feel free to modify the node to detect faces in a different way that matches your use case.
+
+### Landmark-based Facial Expression Recognition ROS2 Node
+
+A ROS2 node for performing landmark-based facial expression recognition using a model trained on the AFEW, CK+ or Oulu-CASIA datasets.
+OpenDR does not include a pretrained model, so one should be provided by the user.
+An alternative would be to use the [image-based facial emotion estimation node](#image-based-facial-emotion-estimation-ros2-node) provided by the toolkit.
+
+You can find the landmark-based facial expression recognition ROS2 node python script [here](./opendr_perception/landmark_based_facial_expression_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's landmark-based facial expression recognition tool which can be found [here](../../../../src/opendr/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/progressive_spatio_temporal_bln_learner.py)
+whose documentation can be found [here](../../../../docs/reference/landmark-based-facial-expression-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the landmark-based facial expression recognition node:
+
+ ```shell
+ ros2 run opendr_perception landmark_based_facial_expression_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+   - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized facial expression category info, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition`)
+ - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the description of the recognized facial expression, `None` to stop the node from publishing on this topic (default=`/opendr/landmark_expression_recognition_description`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--model`: architecture to use for facial expression recognition, options are `pstbln_ck+`, `pstbln_casia`, `pstbln_afew` (default=`pstbln_afew`)
+ - `-s or --shape_predictor SHAPE_PREDICTOR`: shape predictor (landmark_extractor) to use (default=`./predictor_path`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/landmark_expression_recognition`, `/opendr/landmark_expression_recognition_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Skeleton-based Human Action Recognition ROS2 Node
+
+A ROS2 node for performing skeleton-based human action recognition using either ST-GCN or PST-GCN models pretrained on the NTU-RGBD-60 dataset.
+The human body poses in the image are first extracted by the lightweight OpenPose method implemented in the toolkit and then passed to the skeleton-based action recognition method to be categorized.
+
+You can find the skeleton-based human action recognition ROS2 node python script [here](./opendr_perception/skeleton_based_action_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py)
+and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py)
+whose documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the skeleton-based human action recognition node:
+
+ ```shell
+ ros2 run opendr_perception skeleton_based_action_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+ - `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output pose-annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
+ - `-p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC`: topic name for pose annotations, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
+   - `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action`)
+ - `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`)
+ - `--model`: model to use, options are `stgcn` or `pstgcn`, (default=`stgcn`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+   - Detection messages: `/opendr/skeleton_recognized_action`, `/opendr/skeleton_recognized_action_description`, `/opendr/poses`
+ - Output images: `/opendr/image_pose_annotated`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+### Video Human Activity Recognition ROS2 Node
+
+A ROS2 node for performing human activity recognition using either CoX3D or X3D models pretrained on Kinetics400.
+
+You can find the video human activity recognition ROS2 node python script [here](./opendr_perception/video_activity_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's video human activity recognition tools which can be found [here for CoX3D](../../../../src/opendr/perception/activity_recognition/cox3d/cox3d_learner.py) and
+[here for X3D](../../../../src/opendr/perception/activity_recognition/x3d/x3d_learner.py) whose documentation can be found [here](../../../../docs/reference/activity-recognition.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the video human activity recognition node:
+
+ ```shell
+ ros2 run opendr_perception video_activity_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/image_raw`)
+   - `-o or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic to which we are publishing the recognized activity, `None` to stop the node from publishing on this topic (default=`/opendr/human_activity_recognition`)
+ - `-od or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic to which we are publishing the ID of the recognized action, `None` to stop the node from publishing on this topic (default=`/opendr/human_activity_recognition_description`)
+ - `--model`: architecture to use for human activity recognition, options are `cox3d-s`, `cox3d-m`, `cox3d-l`, `x3d-xs`, `x3d-s`, `x3d-m`, or `x3d-l` (default=`cox3d-m`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/human_activity_recognition`, `/opendr/human_activity_recognition_description`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+You can find the corresponding IDs regarding activity recognition [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv).
+
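+For example, to switch to one of the X3D architectures and watch the predicted labels from another terminal, an invocation along these lines could be used:
+
+```shell
+# Example: use the x3d-m architecture instead of the default cox3d-m
+ros2 run opendr_perception video_activity_recognition --model x3d-m
+# In another terminal, print the human-readable activity descriptions
+ros2 topic echo /opendr/human_activity_recognition_description
+```
+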
+## RGB + Infrared input
+
+### 2D Object Detection GEM ROS2 Node
+
+You can find the object detection 2D GEM ROS2 node python script [here](./opendr_perception/object_detection_2d_gem_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [object detection 2D GEM tool](../../../../src/opendr/perception/object_detection_2d/gem/gem_learner.py)
+whose documentation can be found [here](../../../../docs/reference/gem.md).
+
+#### Instructions for basic usage:
+
+1. First, one needs to find corresponding points in the color and infrared images, in order to compute the homography matrix that corrects for the difference in perspective between the infrared and the RGB camera.
+ These points can be selected using a [utility tool](../../../../src/opendr/perception/object_detection_2d/utils/get_color_infra_alignment.py) that is provided in the toolkit.
+
+2. Pass the points you have found as *pts_color* and *pts_infra* arguments to the [ROS2 GEM node](./opendr_perception/object_detection_2d_gem_node.py).
+
+3. Start the node responsible for publishing images. If you have a RealSense camera, then you can use the corresponding node (assuming you have installed [realsense2_camera](http://wiki.ros.org/realsense2_camera)):
+
+ ```shell
+ roslaunch realsense2_camera rs_camera.launch enable_color:=true enable_infra:=true enable_depth:=false enable_sync:=true infra_width:=640 infra_height:=480
+ ```
+
+4. You are then ready to start the object detection 2d GEM node:
+
+ ```shell
+ ros2 run opendr_perception object_detection_2d_gem
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/camera/color/image_raw`)
+ - `-ii or --input_infra_image_topic INPUT_INFRA_IMAGE_TOPIC`: topic name for input infrared image (default=`/camera/infra/image_raw`)
+ - `-oc or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/rgb_image_objects_annotated`)
+ - `-oi or --output_infra_image_topic OUTPUT_INFRA_IMAGE_TOPIC`: topic name for output annotated infrared image, `None` to stop the node from publishing on this topic (default=`/opendr/infra_image_objects_annotated`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+5. Default output topics:
+ - Output RGB images: `/opendr/rgb_image_objects_annotated`
+ - Output infrared images: `/opendr/infra_image_objects_annotated`
+ - Detection messages: `/opendr/objects`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+----
+## RGBD input
+
+### RGBD Hand Gesture Recognition ROS2 Node
+A ROS2 node for performing hand gesture recognition using a MobileNetv2 model trained on the HANDS dataset.
+The node has been tested with Kinectv2 for depth data acquisition with the following drivers: https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.
+
+You can find the RGBD hand gesture recognition ROS2 node python script [here](./opendr_perception/rgbd_hand_gesture_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [hand gesture recognition tool](../../../../src/opendr/perception/multimodal_human_centric/rgbd_hand_gesture_learner/rgbd_hand_gesture_learner.py)
+whose documentation can be found [here](../../../../docs/reference/rgbd-hand-gesture-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images from an RGBD camera. Remember to modify the input topics using the arguments in step 2 if needed.
+
+2. You are then ready to start the hand gesture recognition node:
+ ```shell
+ ros2 run opendr_perception rgbd_hand_gesture_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-ic or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/kinect2/qhd/image_color_rect`)
+ - `-id or --input_depth_image_topic INPUT_DEPTH_IMAGE_TOPIC`: topic name for input depth image (default=`/kinect2/qhd/image_depth_rect`)
+ - `-o or --output_gestures_topic OUTPUT_GESTURES_TOPIC`: topic name for predicted gesture class (default=`/opendr/gestures`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+
+3. Default output topics:
+ - Detection messages:`/opendr/gestures`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
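+If your RGBD camera publishes on different topics than the Kinectv2 defaults, the inputs can be remapped with the arguments above; the topic names below are purely illustrative assumptions:
+
+```shell
+# Hypothetical example: feed the node from non-default RGB and depth topics
+ros2 run opendr_perception rgbd_hand_gesture_recognition \
+  -ic /camera/color/image_raw -id /camera/depth/image_raw
+```
+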
+----
+## RGB + Audio input
+
+### Audiovisual Emotion Recognition ROS2 Node
+
+You can find the audiovisual emotion recognition ROS2 node python script [here](./opendr_perception/audiovisual_emotion_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [audiovisual emotion recognition tool](../../../../src/opendr/perception/multimodal_human_centric/audiovisual_emotion_learner/avlearner.py),
+whose documentation can be found [here](../../../../docs/reference/audiovisual-emotion-recognition-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+2. Start the node responsible for publishing audio. If you have an audio capture device, then you can use the `audio_capture_node` as explained in the [prerequisites above](#prerequisites).
+3. You are then ready to start the audiovisual emotion recognition node:
+
+ ```shell
+ ros2 run opendr_perception audiovisual_emotion_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-iv or --input_video_topic INPUT_VIDEO_TOPIC`: topic name for input video, expects detected face of size 224x224 (default=`/image_raw`)
+ - `-ia or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio`)
+ - `-o or --output_emotions_topic OUTPUT_EMOTIONS_TOPIC`: topic to which we are publishing the predicted emotion (default=`/opendr/audiovisual_emotion`)
+   - `--buffer_size BUFFER_SIZE`: length of audio and video in seconds (default=`3.6`)
+   - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+   - `--delay DELAY`: the delay (in seconds) with which video and audio messages can be synchronized (default=`0.1`)
+
+4. Default output topics:
+ - Detection messages: `/opendr/audiovisual_emotion`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
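+As a minimal sketch of the above, the buffer length can be shortened so that predictions are produced more frequently:
+
+```shell
+# Example: use a 3-second audio/video buffer instead of the default 3.6 seconds
+ros2 run opendr_perception audiovisual_emotion_recognition --buffer_size 3.0
+```
+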
+----
+## Audio input
+
+### Speech Command Recognition ROS2 Node
+
+A ROS2 node for recognizing speech commands from an audio stream using MatchboxNet, EdgeSpeechNets or Quadratic SelfONN models, pretrained on the Google Speech Commands dataset.
+
+You can find the speech command recognition ROS2 node python script [here](./opendr_perception/speech_command_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's speech command recognition tools:
+[EdgeSpeechNets tool](../../../../src/opendr/perception/speech_recognition/edgespeechnets/edgespeechnets_learner.py), [MatchboxNet tool](../../../../src/opendr/perception/speech_recognition/matchboxnet/matchboxnet_learner.py), [Quadratic SelfONN tool](../../../../src/opendr/perception/speech_recognition/quadraticselfonn/quadraticselfonn_learner.py)
+whose documentation can be found here:
+[EdgeSpeechNet docs](../../../../docs/reference/edgespeechnets.md), [MatchboxNet docs](../../../../docs/reference/matchboxnet.md), [Quadratic SelfONN docs](../../../../docs/reference/quadratic-selfonn.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing audio. If you have an audio capture device, then you can use the `audio_capture_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the speech command recognition node:
+
+ ```shell
+ ros2 run opendr_perception speech_command_recognition
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_audio_topic INPUT_AUDIO_TOPIC`: topic name for input audio (default=`/audio`)
+ - `-o or --output_speech_command_topic OUTPUT_SPEECH_COMMAND_TOPIC`: topic name for speech command output (default=`/opendr/speech_recognition`)
+ - `--buffer_size BUFFER_SIZE`: set the size of the audio buffer (expected command duration) in seconds (default=`1.5`)
+ - `--model MODEL`: the model to use, choices are `matchboxnet`, `edgespeechnets` or `quad_selfonn` (default=`matchboxnet`)
+ - `--model_path MODEL_PATH`: if given, the pretrained model will be loaded from the specified local path, otherwise it will be downloaded from an OpenDR FTP server
+
+3. Default output topics:
+ - Detection messages, class id and confidence: `/opendr/speech_recognition`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+EdgeSpeechNets currently does not have a pretrained model available for download; only local files may be used.
+
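+For example, the node can be switched to the Quadratic SelfONN model and its predictions inspected from another terminal:
+
+```shell
+# Example: use the Quadratic SelfONN model instead of the default MatchboxNet
+ros2 run opendr_perception speech_command_recognition --model quad_selfonn
+# In another terminal, print the recognized command class and confidence
+ros2 topic echo /opendr/speech_recognition
+```
+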
+----
+## Point cloud input
+
+### 3D Object Detection Voxel ROS2 Node
+
+A ROS2 node for performing voxel-based 3D object detection using PointPillars or TANet methods, with either models pretrained on the KITTI dataset or custom trained models.
+
+You can find the 3D object detection Voxel ROS2 node python script [here](./opendr_perception/object_detection_3d_voxel_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [3D object detection Voxel tool](../../../../src/opendr/perception/object_detection_3d/voxel_object_detection_3d/voxel_object_detection_3d_learner.py)
+whose documentation can be found [here](../../../../docs/reference/voxel-object-detection-3d.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros2-node) for convenience.
+
+2. You are then ready to start the 3D object detection node:
+
+ ```shell
+ ros2 run opendr_perception object_detection_3d_voxel
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/objects3d`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `-n or --model_name MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`)
+ - `-c or --model_config_path MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/objects3d`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
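+A self-contained way to try the detector without a LiDAR sensor is to combine it with the point cloud dataset node described further below, which publishes on the detector's default input topic:
+
+```shell
+# Terminal 1: publish point clouds from the nano KITTI dataset on /opendr/dataset_point_cloud
+ros2 run opendr_perception point_cloud_dataset
+# Terminal 2: run the voxel-based 3D object detector on the dataset topic (its default input)
+ros2 run opendr_perception object_detection_3d_voxel
+```
+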
+### 3D Object Tracking AB3DMOT ROS2 Node
+
+A ROS2 node for performing 3D object tracking using the stateless AB3DMOT method.
+This is a detection-based method, and therefore a 3D object detector is needed to provide detections, which are then used to make associations and generate tracking IDs.
+The predicted tracking annotations are split into two topics with detections and tracking IDs.
+
+You can find the 3D object tracking AB3DMOT ROS2 node python script [here](./opendr_perception/object_tracking_3d_ab3dmot_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's [3D object tracking AB3DMOT tool](../../../../src/opendr/perception/object_tracking_3d/ab3dmot/object_tracking_3d_ab3dmot_learner.py)
+whose documentation can be found [here](../../../../docs/reference/object-tracking-3d-ab3dmot.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing point clouds. OpenDR provides a [point cloud dataset node](#point-cloud-dataset-ros2-node) for convenience.
+
+2. You are then ready to start the 3D object tracking node:
+
+ ```shell
+ ros2 run opendr_perception object_tracking_3d_ab3dmot
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_point_cloud_topic INPUT_POINT_CLOUD_TOPIC`: point cloud topic provided by either a point_cloud_dataset_node or any other 3D point cloud node (default=`/opendr/dataset_point_cloud`)
+ - `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/objects3d`)
+ - `-t or --tracking3d_id_topic TRACKING3D_ID_TOPIC`: topic name for output tracking IDs with the same element count as in detection topic, `None` to stop the node from publishing on this topic (default=`/opendr/objects_tracking_id`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `-dn or --detector_model_name DETECTOR_MODEL_NAME`: name of the trained model (default=`tanet_car_xyres_16`)
+ - `-dc or --detector_model_config_path DETECTOR_MODEL_CONFIG_PATH`: path to a model .proto config (default=`../../src/opendr/perception/object_detection3d/voxel_object_detection_3d/second_detector/configs/tanet/car/xyres_16.proto`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/objects3d`
+ - Tracking ID messages: `/opendr/objects_tracking_id`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
+----
+## Biosignal input
+
+### Heart Anomaly Detection ROS2 Node
+
+A ROS2 node for performing heart anomaly (atrial fibrillation) detection from ECG data using GRU or ANBOF models trained on the AF dataset.
+
+You can find the heart anomaly detection ROS2 node python script [here](./opendr_perception/heart_anomaly_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node makes use of the toolkit's heart anomaly detection tools: [ANBOF tool](../../../../src/opendr/perception/heart_anomaly_detection/attention_neural_bag_of_feature/attention_neural_bag_of_feature_learner.py) and
+[GRU tool](../../../../src/opendr/perception/heart_anomaly_detection/gated_recurrent_unit/gated_recurrent_unit_learner.py), whose documentation can be found here:
+[ANBOF docs](../../../../docs/reference/attention-neural-bag-of-feature-learner.md) and [GRU docs](../../../../docs/reference/gated-recurrent-unit-learner.md).
+
+#### Instructions for basic usage:
+
+1. Start the node responsible for publishing ECG data.
+
+2. You are then ready to start the heart anomaly detection node:
+
+ ```shell
+ ros2 run opendr_perception heart_anomaly_detection
+ ```
+ The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-i or --input_ecg_topic INPUT_ECG_TOPIC`: topic name for input ECG data (default=`/ecg/ecg`)
+ - `-o or --output_heart_anomaly_topic OUTPUT_HEART_ANOMALY_TOPIC`: topic name for heart anomaly detection (default=`/opendr/heart_anomaly`)
+ - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+ - `--model MODEL`: the model to use, choices are `anbof` or `gru` (default=`anbof`)
+
+3. Default output topics:
+ - Detection messages: `/opendr/heart_anomaly`
+
+ For viewing the output, refer to the [notes above.](#notes)
+
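+For example, assuming an ECG publisher is already running on `/ecg/ecg`, the GRU model can be selected and the detections monitored as follows:
+
+```shell
+# Example: use the GRU model instead of the default ANBOF
+ros2 run opendr_perception heart_anomaly_detection --model gru
+# In another terminal, print the anomaly detections
+ros2 topic echo /opendr/heart_anomaly
+```
+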
+----
+## Dataset ROS2 Nodes
+
+The dataset nodes can be used to publish data from the disk, which is useful for testing the functionality without the use of a sensor.
+Dataset nodes use a provided `DatasetIterator` object that returns a `(Data, Target)` pair.
+If the type of the `Data` object is correct, the node will transform it into a corresponding ROS2 message object and publish it to a desired topic.
+The OpenDR toolkit currently provides two such nodes, an image dataset node and a point cloud dataset node.
+
+### Image Dataset ROS2 Node
+
+The image dataset node downloads a `nano_MOT20` dataset from OpenDR's FTP server and uses it to publish data to a ROS2 topic,
+which is intended to be used with the [2D object tracking nodes](#2d-object-tracking-ros2-nodes).
+
+You can create an instance of this node with any `DatasetIterator` object that returns `(Image, Target)` as elements,
+to use alongside other nodes and datasets.
+You can inspect [the node](./opendr_perception/image_dataset_node.py) and modify it to your needs for other image datasets.
+
+To get an image from a dataset on the disk, you can start an `image_dataset.py` node as:
+```shell
+ros2 run opendr_perception image_dataset
+```
+The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-o or --output_rgb_image_topic`: topic name to publish the data (default=`/opendr/dataset_image`)
+ - `-f or --fps FPS`: data fps (default=`10`)
+ - `-d or --dataset_path DATASET_PATH`: path to a dataset (default=`/MOT`)
+ - `-ks or --mot20_subsets_path MOT20_SUBSETS_PATH`: path to MOT20 subsets (default=`../../src/opendr/perception/object_tracking_2d/datasets/splits/nano_mot20.train`)
+
+### Point Cloud Dataset ROS2 Node
+
+The point cloud dataset node downloads a `nano_KITTI` dataset from OpenDR's FTP server and uses it to publish data to a ROS2 topic,
+which is intended to be used with the [3D object detection node](#3d-object-detection-voxel-ros2-node),
+as well as the [3D object tracking node](#3d-object-tracking-ab3dmot-ros2-node).
+
+You can create an instance of this node with any `DatasetIterator` object that returns `(PointCloud, Target)` as elements,
+to use alongside other nodes and datasets.
+You can inspect [the node](./opendr_perception/point_cloud_dataset_node.py) and modify it to your needs for other point cloud datasets.
+
+To get a point cloud from a dataset on the disk, you can start a `point_cloud_dataset.py` node as:
+```shell
+ros2 run opendr_perception point_cloud_dataset
+```
+The following optional arguments are available:
+ - `-h or --help`: show a help message and exit
+ - `-o or --output_point_cloud_topic`: topic name to publish the data (default=`/opendr/dataset_point_cloud`)
+ - `-f or --fps FPS`: data fps (default=`10`)
+ - `-d or --dataset_path DATASET_PATH`: path to a dataset, if it does not exist, nano KITTI dataset will be downloaded there (default=`/KITTI/opendr_nano_kitti`)
+ - `-ks or --kitti_subsets_path KITTI_SUBSETS_PATH`: path to KITTI subsets, used only if a KITTI dataset is downloaded (default=`../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets`)
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/__init__.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/__init__.py
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/__init__.py
rename to projects/opendr_ws_2/src/opendr_perception/opendr_perception/__init__.py
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/audiovisual_emotion_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/audiovisual_emotion_recognition_node.py
new file mode 100644
index 0000000000..008b51d7b7
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/audiovisual_emotion_recognition_node.py
@@ -0,0 +1,167 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import argparse
+import numpy as np
+import torch
+import librosa
+import cv2
+
+import rclpy
+from rclpy.node import Node
+import message_filters
+from sensor_msgs.msg import Image as ROS_Image
+from audio_common_msgs.msg import AudioData
+from vision_msgs.msg import Classification2D
+
+from opendr_bridge import ROS2Bridge
+from opendr.perception.multimodal_human_centric import AudiovisualEmotionLearner
+from opendr.perception.multimodal_human_centric import spatial_transforms as transforms
+from opendr.engine.data import Video, Timeseries
+
+
+class AudiovisualEmotionNode(Node):
+
+ def __init__(self, input_video_topic="/image_raw", input_audio_topic="/audio",
+ output_emotions_topic="/opendr/audiovisual_emotion", buffer_size=3.6, device="cuda",
+ delay=0.1):
+ """
+ Creates a ROS2 Node for audiovisual emotion recognition
+ :param input_video_topic: Topic from which we are reading the input video. Expects detected face of size 224x224
+ :type input_video_topic: str
+ :param input_audio_topic: Topic from which we are reading the input audio
+ :type input_audio_topic: str
+ :param output_emotions_topic: Topic to which we are publishing the predicted class
+ :type output_emotions_topic: str
+ :param buffer_size: length of audio and video in sec
+ :type buffer_size: float
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param delay: Define the delay (in seconds) with which the video message and audio message can be synchronized
+ :type delay: float
+ """
+ super().__init__("opendr_audiovisual_emotion_recognition_node")
+
+ self.publisher = self.create_publisher(Classification2D, output_emotions_topic, 1)
+
+ video_sub = message_filters.Subscriber(self, ROS_Image, input_video_topic, qos_profile=1)
+ audio_sub = message_filters.Subscriber(self, AudioData, input_audio_topic, qos_profile=1)
+ # synchronize video and audio data topics
+ ts = message_filters.ApproximateTimeSynchronizer([video_sub, audio_sub], queue_size=10, slop=delay,
+ allow_headerless=True)
+ ts.registerCallback(self.callback)
+
+ self.bridge = ROS2Bridge()
+
+ self.avlearner = AudiovisualEmotionLearner(device=device, fusion='ia', mod_drop='zerodrop')
+ if not os.path.exists('model'):
+ self.avlearner.download('model')
+ self.avlearner.load('model')
+
+ self.buffer_size = buffer_size
+ self.data_buffer = np.zeros((1))
+ self.video_buffer = np.zeros((1, 224, 224, 3))
+
+ self.video_transform = transforms.Compose([
+ transforms.ToTensor(255)])
+
+ self.get_logger().info("Audiovisual emotion recognition node started!")
+
+ def callback(self, image_data, audio_data):
+ """
+ Callback that process the input data and publishes to the corresponding topics
+ :param image_data: input image message, face image
+ :type image_data: sensor_msgs.msg.Image
+ :param audio_data: input audio message, speech
+ :type audio_data: audio_common_msgs.msg.AudioData
+ """
+ audio_data = np.reshape(np.frombuffer(audio_data.data, dtype=np.int16)/32768.0, (1, -1))
+ self.data_buffer = np.append(self.data_buffer, audio_data)
+
+ image_data = self.bridge.from_ros_image(image_data, encoding='bgr8').convert(format='channels_last')
+ image_data = cv2.resize(image_data, (224, 224))
+
+ self.video_buffer = np.append(self.video_buffer, np.expand_dims(image_data.data, 0), axis=0)
+
+ if self.data_buffer.shape[0] > 16000*self.buffer_size:
+            audio = librosa.feature.mfcc(y=self.data_buffer[1:], sr=16000, n_mfcc=10)
+ audio = Timeseries(audio)
+
+ to_select = select_distributed(15, len(self.video_buffer)-1)
+ video = self.video_buffer[1:][to_select]
+
+ video = [self.video_transform(img) for img in video]
+ video = Video(torch.stack(video, 0).permute(1, 0, 2, 3))
+
+ class_pred = self.avlearner.infer(audio, video)
+
+ # Publish output
+ ros_class = self.bridge.from_category_to_rosclass(class_pred, self.get_clock().now().to_msg())
+ self.publisher.publish(ros_class)
+
+ self.data_buffer = np.zeros((1))
+ self.video_buffer = np.zeros((1, 224, 224, 3))
+
+
+def select_distributed(m, n): return [i*n//m + n//(2*m) for i in range(m)]
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-iv", "--input_video_topic", type=str, default="/image_raw",
+ help="Listen to video input data on this topic")
+ parser.add_argument("-ia", "--input_audio_topic", type=str, default="/audio",
+ help="Listen to audio input data on this topic")
+ parser.add_argument("-o", "--output_emotions_topic", type=str, default="/opendr/audiovisual_emotion",
+ help="Topic name for output emotions recognition")
+ parser.add_argument("--device", type=str, default="cuda",
+ help="Device to use (cpu, cuda)", choices=["cuda", "cpu"])
+ parser.add_argument("--buffer_size", type=float, default=3.6,
+ help="Size of the audio buffer in seconds")
+ parser.add_argument("--delay", help="The delay (in seconds) with which RGB message and"
+ "depth message can be synchronized", type=float, default=0.1)
+
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+ except:
+ print("Using CPU")
+ device = "cpu"
+
+ emotion_node = AudiovisualEmotionNode(input_video_topic=args.input_video_topic,
+ input_audio_topic=args.input_audio_topic,
+ output_emotions_topic=args.output_emotions_topic,
+ buffer_size=args.buffer_size, device=device, delay=args.delay)
+
+ rclpy.spin(emotion_node)
+
+ emotion_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_detection_retinaface_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_detection_retinaface_node.py
new file mode 100644
index 0000000000..b4c20114c8
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_detection_retinaface_node.py
@@ -0,0 +1,148 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+from opendr.engine.data import Image
+
+
+class FaceDetectionNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_faces_annotated",
+ detections_topic="/opendr/faces", device="cuda", backbone="resnet"):
+ """
+ Creates a ROS2 Node for face detection with RetinaFace.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no face detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: retinaface backbone, options are either 'mnet' or 'resnet',
+ where 'mnet' detects masked faces as well
+ :type backbone: str
+ """
+ super().__init__('opendr_face_detection_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.face_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.face_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
+ self.face_detector.download(path=".", verbose=True)
+ self.face_detector.load("retinaface_{}".format(backbone))
+ self.class_names = ["face", "masked_face"]
+
+ self.get_logger().info("Face detection retinaface node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run face detection
+ boxes = self.face_detector.infer(image)
+
+ if self.face_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ self.face_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with face detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_faces_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/faces")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone",
+ help="Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects "
+ "masked faces as well",
+ type=str, default="resnet", choices=["resnet", "mnet"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ face_detection_node = FaceDetectionNode(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(face_detection_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ face_detection_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
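+
+# A minimal sketch of running this node directly (assumes a camera driver already publishes
+# sensor_msgs/Image on the image_raw topic and that the pretrained model can be downloaded):
+#   python3 face_detection_retinaface_node.py --device cpu -i image_raw -o /opendr/image_faces_annotated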
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_recognition_node.py
new file mode 100644
index 0000000000..b774ea0eb9
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/face_recognition_node.py
@@ -0,0 +1,193 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from std_msgs.msg import String
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import ObjectHypothesisWithPose
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.face_recognition import FaceRecognitionLearner
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d.datasets.transforms import BoundingBoxListToNumpyArray
+
+
+class FaceRecognitionNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_face_reco_annotated",
+ detections_topic="/opendr/face_recognition", detections_id_topic="/opendr/face_recognition_id",
+ database_path="./database", device="cuda", backbone="mobilefacenet"):
+ """
+ Creates a ROS2 Node for face recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the recognized face information (if None,
+ no face recognition message is published)
+ :type detections_topic: str
+ :param detections_id_topic: Topic to which we are publishing the ID of the recognized person (if None,
+ no ID message is published)
+ :type detections_id_topic: str
+ :param device: Device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: Backbone network
+ :type backbone: str
+ :param database_path: Path of the directory where the images of the faces to be recognized are stored
+ :type database_path: str
+ """
+ super().__init__('opendr_face_recognition_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.face_publisher = self.create_publisher(ObjectHypothesisWithPose, detections_topic, 1)
+ else:
+ self.face_publisher = None
+
+ if detections_id_topic is not None:
+ self.face_id_publisher = self.create_publisher(String, detections_id_topic, 1)
+ else:
+ self.face_id_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the face recognizer
+ self.recognizer = FaceRecognitionLearner(device=device, mode='backbone_only', backbone=backbone)
+ self.recognizer.download(path=".")
+ self.recognizer.load(".")
+ self.recognizer.fit_reference(database_path, save_path=".", create_new=True)
+
+ # Initialize the face detector
+ self.face_detector = RetinaFaceLearner(backbone='mnet', device=device)
+ self.face_detector.download(path=".", verbose=True)
+ self.face_detector.load("retinaface_{}".format('mnet'))
+ self.class_names = ["face", "masked_face"]
+
+ self.get_logger().info("Face recognition node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ # Run face detection and recognition
+ if image is not None:
+ bounding_boxes = self.face_detector.infer(image)
+ if bounding_boxes:
+ bounding_boxes = BoundingBoxListToNumpyArray()(bounding_boxes)
+ boxes = bounding_boxes[:, :4]
+ for idx, box in enumerate(boxes):
+ (startX, startY, endX, endY) = int(box[0]), int(box[1]), int(box[2]), int(box[3])
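+ # Crop the detected face region and run recognition on it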
+ frame = image[startY:endY, startX:endX]
+ result = self.recognizer.infer(frame)
+
+ # Publish face information and ID
+ if self.face_publisher is not None:
+ self.face_publisher.publish(self.bridge.to_ros_face(result))
+
+ if self.face_id_publisher is not None:
+ self.face_id_publisher.publish(self.bridge.to_ros_face_id(result))
+
+ if self.image_publisher is not None:
+ if result.description != 'Not found':
+ color = (0, 255, 0)
+ else:
+ color = (0, 0, 255)
+ # Annotate image with face detection/recognition boxes
+ cv2.rectangle(image, (startX, startY), (endX, endY), color, thickness=2)
+ cv2.putText(image, result.description, (startX, endY - 10), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, color, 2, cv2.LINE_AA)
+
+ if self.image_publisher is not None:
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_face_reco_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/face_recognition")
+ parser.add_argument("-id", "--detections_id_topic", help="Topic name for detection ID messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/face_recognition_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to mobilefacenet",
+ type=str, default="mobilefacenet", choices=["mobilefacenet"])
+ parser.add_argument("--dataset_path",
+ help="Path of the directory where the images of the faces to be recognized are stored, "
+ "defaults to \"./database\"",
+ type=str, default="./database")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ face_recognition_node = FaceRecognitionNode(device=device, backbone=args.backbone, database_path=args.dataset_path,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ detections_id_topic=args.detections_id_topic)
+
+ rclpy.spin(face_recognition_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ face_recognition_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/facial_emotion_estimation_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/facial_emotion_estimation_node.py
new file mode 100644
index 0000000000..56b22309c0
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/facial_emotion_estimation_node.py
@@ -0,0 +1,218 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+import cv2
+from torchvision import transforms
+import PIL
+
+import rclpy
+from rclpy.node import Node
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+from opendr.perception.facial_expression_recognition import image_processing
+from opendr.perception.object_detection_2d import RetinaFaceLearner
+from opendr.perception.object_detection_2d.datasets.transforms import BoundingBoxListToNumpyArray
+
+INPUT_IMAGE_SIZE = (96, 96)
+INPUT_IMAGE_NORMALIZATION_MEAN = [0.0, 0.0, 0.0]
+INPUT_IMAGE_NORMALIZATION_STD = [1.0, 1.0, 1.0]
+
+
+class FacialEmotionEstimationNode(Node):
+ def __init__(self,
+ face_detector_learner,
+ input_rgb_image_topic="/image_raw",
+ output_rgb_image_topic="/opendr/image_emotion_estimation_annotated",
+ output_emotions_topic="/opendr/facial_emotion_estimation",
+ output_emotions_description_topic="/opendr/facial_emotion_estimation_description",
+ device="cuda"):
+ """
+ Creates a ROS2 Node for facial emotion estimation.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param output_emotions_topic: Topic to which we are publishing the facial emotion results
+ (if None, we are not publishing the info)
+ :type output_emotions_topic: str
+ :param output_emotions_description_topic: Topic to which we are publishing the description of the estimated
+ facial emotion (if None, we are not publishing the description)
+ :type output_emotions_description_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ super().__init__('opendr_facial_emotion_estimation_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+ self.bridge = ROS2Bridge()
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if output_emotions_topic is not None:
+ self.hypothesis_publisher = self.create_publisher(ObjectHypothesis, output_emotions_topic, 1)
+ else:
+ self.hypothesis_publisher = None
+
+ if output_emotions_description_topic is not None:
+ self.string_publisher = self.create_publisher(String, output_emotions_description_topic, 1)
+ else:
+ self.string_publisher = None
+
+ # Initialize the face detector
+ self.face_detector = face_detector_learner
+
+ # Initialize the facial emotion estimator
+ self.facial_emotion_estimator = FacialEmotionLearner(device=device, batch_size=2,
+ ensemble_size=9,
+ name_experiment='esr_9')
+ self.facial_emotion_estimator.init_model(num_branches=9)
+ model_saved_path = self.facial_emotion_estimator.download(path=None, mode="pretrained")
+ self.facial_emotion_estimator.load(ensemble_size=9, path_to_saved_network=model_saved_path)
+
+ self.get_logger().info("Facial emotion estimation node started.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8').opencv()
+ emotion = None
+ # Run face detection and emotion estimation
+
+ if image is not None:
+ bounding_boxes = self.face_detector.infer(image)
+ if bounding_boxes:
+ bounding_boxes = BoundingBoxListToNumpyArray()(bounding_boxes)
+ boxes = bounding_boxes[:, :4]
+ for idx, box in enumerate(boxes):
+ (startX, startY, endX, endY) = int(box[0]), int(box[1]), int(box[2]), int(box[3])
+ face_crop = image[startY:endY, startX:endX]
+
+ # Preprocess detected face
+ input_face = _pre_process_input_image(face_crop)
+
+ # Recognize facial expression
+ emotion, affect = self.facial_emotion_estimator.infer(input_face)
+
+ # Converts from Tensor to ndarray
+ affect = np.array([a.cpu().detach().numpy() for a in affect])
+ affect = affect[0] # a numpy array of valence and arousal values
+ emotion = emotion[0] # the emotion class with confidence tensor
+
+ cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 255), thickness=2)
+ cv2.putText(image, "Valence: %.2f" % affect[0], (startX, endY - 30), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+ cv2.putText(image, "Arousal: %.2f" % affect[1], (startX, endY - 15), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+ cv2.putText(image, emotion.description, (startX, endY), cv2.FONT_HERSHEY_SIMPLEX,
+ 0.5, (0, 255, 255), 1, cv2.LINE_AA)
+
+ if self.hypothesis_publisher is not None and emotion:
+ self.hypothesis_publisher.publish(self.bridge.to_ros_category(emotion))
+
+ if self.string_publisher is not None and emotion:
+ self.string_publisher.publish(self.bridge.to_ros_category_description(emotion))
+
+ if self.image_publisher is not None:
+ # Convert the annotated OpenDR image to ROS image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def _pre_process_input_image(image):
+ """
+ Pre-processes an image for ESR-9.
+
+ :param image: (ndarray)
+ :return: (ndarray) image
+ """
+
+ image = image_processing.resize(image, INPUT_IMAGE_SIZE)
+ image = PIL.Image.fromarray(image)
+ image = transforms.Normalize(mean=INPUT_IMAGE_NORMALIZATION_MEAN,
+ std=INPUT_IMAGE_NORMALIZATION_STD)(transforms.ToTensor()(image)).unsqueeze(0)
+
+ return image
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-i', '--input_rgb_image_topic', type=str, help='Topic name for input rgb image',
+ default='/image_raw')
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_emotion_estimation_annotated")
+ parser.add_argument("-e", "--output_emotions_topic", help="Topic name for output emotion",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/facial_emotion_estimation")
+ parser.add_argument('-m', '--output_emotions_description_topic',
+ help='Topic to which we are publishing the description of the estimated facial emotion',
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/facial_emotion_estimation_description")
+ parser.add_argument('-d', '--device', help='Device to use, either cpu or cuda',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ print("GPU found.")
+ device = 'cuda'
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = 'cpu'
+ except:
+ print("Using CPU")
+ device = 'cpu'
+
+ # Initialize the face detector
+ face_detector = RetinaFaceLearner(backbone="resnet", device=device)
+ face_detector.download(path=".", verbose=True)
+ face_detector.load("retinaface_{}".format("resnet"))
+
+ facial_emotion_estimation_node = FacialEmotionEstimationNode(
+ face_detector,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ output_emotions_topic=args.output_emotions_topic,
+ output_emotions_description_topic=args.output_emotions_description_topic,
+ device=device)
+
+ rclpy.spin(facial_emotion_estimation_node)
+ facial_emotion_estimation_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/fall_detection_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/fall_detection_node.py
new file mode 100644
index 0000000000..3057bc7f83
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/fall_detection_node.py
@@ -0,0 +1,189 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import BoundingBox, BoundingBoxList
+from opendr.perception.pose_estimation import get_bbox
+from opendr.perception.pose_estimation import LightweightOpenPoseLearner
+from opendr.perception.fall_detection import FallDetectorLearner
+
+
+class FallDetectionNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_fallen_annotated",
+ detections_topic="/opendr/fallen", device="cuda",
+ num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+ Creates a ROS2 Node for rule-based fall detection based on Lightweight OpenPose.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no pose detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param num_refinement_stages: Specifies the number of pose estimation refinement stages that are added on the
+ model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower but more accurate
+ inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ super().__init__('opendr_fall_detection_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.fall_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.fall_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the pose estimation learner
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ # Initialize the fall detection learner
+ self.fall_detector = FallDetectorLearner(self.pose_estimator)
+
+ self.get_logger().info("Fall detection node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run fall detection
+ detections = self.fall_detector.infer(image)
+
+ # Get an OpenCV image back
+ image = image.opencv()
+
+ bboxes = BoundingBoxList([])
+ fallen_pose_id = 0
+ for detection in detections:
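+ # Each detection carries a fallen/not-fallen category (data == 1 means fallen) and the corresponding pose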
+ fallen = detection[0].data
+ pose = detection[2]
+ x, y, w, h = get_bbox(pose)
+
+ if fallen == 1:
+ if self.image_publisher is not None:
+ # Paint person bounding box inferred from pose
+ color = (0, 0, 255)
+ cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
+ cv2.putText(image, "Fallen person", (x, y + h - 10), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, color, 2, cv2.LINE_AA)
+
+ if self.fall_publisher is not None:
+ # Convert detected boxes to ROS type and add to list
+ bboxes.data.append(BoundingBox(left=x, top=y, width=w, height=h, name=fallen_pose_id))
+ fallen_pose_id += 1
+
+ if self.fall_publisher is not None:
+ if len(bboxes) > 0:
+ self.fall_publisher.publish(self.bridge.to_ros_boxes(bboxes))
+
+ if self.image_publisher is not None:
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_fallen_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/fallen")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ fall_detection_node = FallDetectionNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+
+ rclpy.spin(fall_detection_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ fall_detection_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/heart_anomaly_detection_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/heart_anomaly_detection_node.py
new file mode 100644
index 0000000000..7934c8ac19
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/heart_anomaly_detection_node.py
@@ -0,0 +1,123 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+from vision_msgs.msg import Classification2D
+from std_msgs.msg import Float32MultiArray
+
+from opendr_bridge import ROS2Bridge
+from opendr.perception.heart_anomaly_detection import GatedRecurrentUnitLearner, AttentionNeuralBagOfFeatureLearner
+
+
+class HeartAnomalyNode(Node):
+
+ def __init__(self, input_ecg_topic="/ecg/ecg", output_heart_anomaly_topic="/opendr/heart_anomaly",
+ device="cuda", model="anbof"):
+ """
+ Creates a ROS2 Node for heart anomaly (atrial fibrillation) detection from ECG data.
+ :param input_ecg_topic: Topic from which we are reading the input array data
+ :type input_ecg_topic: str
+ :param output_heart_anomaly_topic: Topic to which we are publishing the predicted class
+ :type output_heart_anomaly_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model: model to use: anbof or gru
+ :type model: str
+ """
+ super().__init__("opendr_heart_anomaly_detection_node")
+
+ self.publisher = self.create_publisher(Classification2D, output_heart_anomaly_topic, 1)
+
+ self.subscriber = self.create_subscription(Float32MultiArray, input_ecg_topic, self.callback, 1)
+
+ self.bridge = ROS2Bridge()
+
+ # AF dataset settings: single-channel ECG with a fixed series length of 9000 samples
+ self.channels = 1
+ self.series_length = 9000
+
+ if model == 'gru':
+ self.learner = GatedRecurrentUnitLearner(in_channels=self.channels, series_length=self.series_length,
+ n_class=4, device=device)
+ elif model == 'anbof':
+ self.learner = AttentionNeuralBagOfFeatureLearner(in_channels=self.channels, series_length=self.series_length,
+ n_class=4, device=device, attention_type='temporal')
+
+ self.learner.download(path='.', fold_idx=0)
+ self.learner.load(path='.')
+
+ self.get_logger().info("Heart anomaly detection node initialized.")
+
+ def callback(self, msg_data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param msg_data: input message
+ :type msg_data: std_msgs.msg.Float32MultiArray
+ """
+ # Convert Float32MultiArray to OpenDR Timeseries
+ data = self.bridge.from_rosarray_to_timeseries(msg_data, self.channels, self.series_length)
+
+ # Run ecg classification
+ class_pred = self.learner.infer(data)
+
+ # Publish results
+ ros_class = self.bridge.from_category_to_rosclass(class_pred, self.get_clock().now().to_msg())
+ self.publisher.publish(ros_class)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_ecg_topic", type=str, default="/ecg/ecg",
+ help="listen to input ECG data on this topic")
+ parser.add_argument("-o", "--output_heart_anomaly_topic", type=str, default="/opendr/heart_anomaly",
+ help="Topic name for heart anomaly detection topic")
+ parser.add_argument("--model", type=str, default="anbof", help="model to be used for prediction: anbof or gru",
+ choices=["anbof", "gru"])
+ parser.add_argument("--device", type=str, default="cuda", help="Device to use (cpu, cuda)",
+ choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+ except:
+ print("Using CPU")
+ device = "cpu"
+
+ heart_anomaly_detection_node = HeartAnomalyNode(input_ecg_topic=args.input_ecg_topic,
+ output_heart_anomaly_topic=args.output_heart_anomaly_topic,
+ model=args.model, device=device)
+
+ rclpy.spin(heart_anomaly_detection_node)
+
+ heart_anomaly_detection_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/hr_pose_estimation_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/hr_pose_estimation_node.py
new file mode 100644
index 0000000000..f8c6a1e30e
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/hr_pose_estimation_node.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr_interface.msg import OpenDRPose2D
+
+from opendr.engine.data import Image
+from opendr.perception.pose_estimation import draw
+from opendr.perception.pose_estimation import HighResolutionPoseEstimationLearner
+
+
+class PoseEstimationNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_pose_annotated",
+ detections_topic="/opendr/poses", device="cuda",
+ num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+ Creates a ROS2 Node for high resolution pose estimation based on Lightweight OpenPose.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no pose detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param num_refinement_stages: Specifies the number of pose estimation refinement stages that are added on the
+ model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower but more accurate
+ inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ super().__init__('opendr_pose_estimation_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.pose_publisher = self.create_publisher(OpenDRPose2D, detections_topic, 1)
+ else:
+ self.pose_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the high resolution pose estimation learner
+ self.pose_estimator = HighResolutionPoseEstimationLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ self.get_logger().info("Pose estimation node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run pose estimation
+ poses = self.pose_estimator.infer(image)
+
+ # Publish detections in ROS message
+ if self.pose_publisher is not None:
+ for pose in poses:
+ if pose.id is None: # Temporary fix for pose not having id
+ pose.id = -1
+ # Convert OpenDR pose to ROS2 pose message using bridge and publish it
+ self.pose_publisher.publish(self.bridge.to_ros_pose(pose))
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with poses
+ for pose in poses:
+ draw(image, pose)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image, if \"None\" "
+ "no output image is published",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages, if \"None\" "
+ "no detection message is published",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ pose_estimator_node = PoseEstimationNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+
+ rclpy.spin(pose_estimator_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ pose_estimator_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/image_dataset_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/image_dataset_node.py
new file mode 100644
index 0000000000..3587e37aef
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/image_dataset_node.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import rclpy
+from rclpy.node import Node
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr.engine.datasets import DatasetIterator
+from opendr.perception.object_tracking_2d import MotDataset, RawMotDatasetIterator
+
+
+class ImageDatasetNode(Node):
+ def __init__(
+ self,
+ dataset: DatasetIterator,
+ output_rgb_image_topic="/opendr/dataset_image",
+ data_fps=10,
+ ):
+ """
+ Creates a ROS2 Node for publishing dataset images
+ """
+
+ super().__init__('opendr_image_dataset_node')
+
+ self.dataset = dataset
+ self.bridge = ROS2Bridge()
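+ # Publish one dataset image every 1 / data_fps seconds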
+ self.timer = self.create_timer(1.0 / data_fps, self.timer_callback)
+ self.sample_index = 0
+
+ self.output_image_publisher = self.create_publisher(
+ ROS_Image, output_rgb_image_topic, 1
+ )
+ self.get_logger().info("Publishing images.")
+
+ def timer_callback(self):
+ image = self.dataset[self.sample_index % len(self.dataset)][0]
+ # Dataset should have an (Image, Target) pair as elements
+
+ message = self.bridge.to_ros_image(
+ image, encoding="bgr8"
+ )
+ self.output_image_publisher.publish(message)
+
+ self.sample_index += 1
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-d", "--dataset_path", help="Path to a dataset",
+ type=str, default="MOT")
+ parser.add_argument(
+ "-ks", "--mot20_subsets_path", help="Path to mot20 subsets",
+ type=str, default=os.path.join(
+ "..", "..", "src", "opendr", "perception", "object_tracking_2d",
+ "datasets", "splits", "nano_mot20.train"
+ )
+ )
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name to publish the data",
+ type=str, default="/opendr/dataset_image")
+ parser.add_argument("-f", "--fps", help="Data FPS",
+ type=float, default=30)
+ args = parser.parse_args()
+
+ dataset_path = args.dataset_path
+ mot20_subsets_path = args.mot20_subsets_path
+ output_rgb_image_topic = args.output_rgb_image_topic
+ data_fps = args.fps
+
+ if not os.path.exists(dataset_path):
+ dataset_path = MotDataset.download_nano_mot20(
+ "MOT", True
+ ).path
+
+ dataset = RawMotDatasetIterator(
+ dataset_path,
+ {
+ "mot20": mot20_subsets_path
+ },
+ scan_labels=False
+ )
+ dataset_node = ImageDatasetNode(
+ dataset,
+ output_rgb_image_topic=output_rgb_image_topic,
+ data_fps=data_fps,
+ )
+
+ rclpy.spin(dataset_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ dataset_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py
new file mode 100644
index 0000000000..cb43293f19
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py
@@ -0,0 +1,184 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+
+import rclpy
+from rclpy.node import Node
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+
+from opendr.perception.facial_expression_recognition import ProgressiveSpatioTemporalBLNLearner
+from opendr.perception.facial_expression_recognition import landmark_extractor
+from opendr.perception.facial_expression_recognition import gen_muscle_data
+from opendr.perception.facial_expression_recognition import data_normalization
+
+
+class LandmarkFacialExpressionRecognitionNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw",
+ output_category_topic="/opendr/landmark_expression_recognition",
+ output_category_description_topic="/opendr/landmark_expression_recognition_description",
+ device="cpu", model='pstbln_afew', shape_predictor='./predictor_path'):
+ """
+ Creates a ROS2 Node for landmark-based facial expression recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_category_topic: Topic to which we are publishing the recognized facial expression category info
+ (if None, we are not publishing the info)
+ :type output_category_topic: str
+ :param output_category_description_topic: Topic to which we are publishing the description of the recognized
+ facial expression (if None, we are not publishing the description)
+ :type output_category_description_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model: model to use for landmark-based facial expression recognition.
+ (Options: 'pstbln_ck+', 'pstbln_casia', 'pstbln_afew')
+ :type model: str
+ :param shape_predictor: pretrained model to use for landmark extraction from a facial image
+ :type shape_predictor: str
+ """
+ super().__init__('opendr_landmark_based_facial_expression_recognition_node')
+ # Set up ROS topics and bridge
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_category_topic is not None:
+ self.hypothesis_publisher = self.create_publisher(ObjectHypothesis, output_category_topic, 1)
+ else:
+ self.hypothesis_publisher = None
+
+ if output_category_description_topic is not None:
+ self.string_publisher = self.create_publisher(String, output_category_description_topic, 1)
+ else:
+ self.string_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the landmark-based facial expression recognition
+ if model == 'pstbln_ck+':
+ num_point = 303
+ num_class = 7
+ elif model == 'pstbln_casia':
+ num_point = 309
+ num_class = 6
+ elif model == 'pstbln_afew':
+ num_point = 312
+ num_class = 7
+ self.model_name, self.dataset_name = model.split("_")
+ self.expression_classifier = ProgressiveSpatioTemporalBLNLearner(device=device, dataset_name=self.dataset_name,
+ num_class=num_class, num_point=num_point,
+ num_person=1, in_channels=2,
+ blocksize=5, topology=[15, 10, 15, 5, 5, 10])
+ model_saved_path = "./pretrained_models/" + model
+ self.expression_classifier.load(model_saved_path, model)
+ self.shape_predictor = shape_predictor
+
+ self.get_logger().info("landmark-based facial expression recognition node started!")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ landmarks = landmark_extractor(image, './landmarks.npy', self.shape_predictor)
+
+ # Generate sequence data from the extracted landmarks, normalize it and compute muscle data
+
+ numpy_data = _landmark2numpy(landmarks)
+ norm_data = data_normalization(numpy_data)
+ muscle_data = gen_muscle_data(norm_data, './muscle_data')
+
+ # Run expression recognition
+ category = self.expression_classifier.infer(muscle_data)
+
+ if self.hypothesis_publisher is not None:
+ self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))
+
+ if self.string_publisher is not None:
+ self.string_publisher.publish(self.bridge.to_ros_category_description(category))
+
+
+def _landmark2numpy(landmarks):
+ num_landmarks = 68
+ num_dim = 2 # feature dimension for each facial landmark
+ num_faces = 1 # number of faces in each frame
+ num_frames = 15
+ numpy_data = np.zeros((1, num_dim, num_frames, num_landmarks, num_faces))
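+ # Replicate the single extracted landmark frame across all time steps expected by the classifier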
+ for t in range(num_frames):
+ numpy_data[0, 0:num_dim, t, :, 0] = landmarks
+ return numpy_data
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_category_topic", help="Topic name for output recognized category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/landmark_expression_recognition")
+ parser.add_argument("-d", "--output_category_description_topic", help="Topic name for category description",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/landmark_expression_recognition_description")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model to use, either 'pstbln_ck+', 'pstbln_casia', 'pstbln_afew'",
+ type=str, default="pstbln_afew", choices=['pstbln_ck+', 'pstbln_casia', 'pstbln_afew'])
+ parser.add_argument("-s", "--shape_predictor", help="Shape predictor (landmark_extractor) to use",
+ type=str, default='./predictor_path')
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ landmark_expression_estimation_node = \
+ LandmarkFacialExpressionRecognitionNode(
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device, model=args.model,
+ shape_predictor=args.shape_predictor)
+
+ rclpy.spin(landmark_expression_estimation_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ landmark_expression_estimation_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_centernet_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_centernet_node.py
new file mode 100644
index 0000000000..e0ba51c629
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_centernet_node.py
@@ -0,0 +1,143 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import CenterNetDetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionCenterNetNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects", device="cuda", backbone="resnet50_v1b"):
+ """
+ Creates a ROS2 Node for object detection with CenterNet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ """
+ super().__init__('opendr_object_detection_2d_centernet_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.object_detector = CenterNetDetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("centernet_default")
+
+ self.get_logger().info("Object Detection 2D Centernet node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to \"resnet50_v1b\"",
+ type=str, default="resnet50_v1b", choices=["resnet50_v1b"])
+ args = parser.parse_args()
+
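+    # Select the inference device: fall back to CPU when MXNet reports no GPUs or when
+    # querying the GPU count fails.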
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_centernet_node = ObjectDetectionCenterNetNode(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(object_detection_centernet_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_centernet_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py
new file mode 100644
index 0000000000..154dabf79f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py
@@ -0,0 +1,242 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import DetrLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionDetrNode(Node):
+ def __init__(
+ self,
+ input_rgb_image_topic="image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects",
+ device="cuda",
+ ):
+ """
+ Creates a ROS2 Node for object detection with DETR.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ super().__init__("opendr_object_detection_2d_detr_node")
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.detection_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.detection_publisher = None
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ self.bridge = ROS2Bridge()
+
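+        # COCO category names in DETR's 91-id ordering; "N/A" marks ids with no category assigned.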
+ self.class_names = [
+ "N/A",
+ "person",
+ "bicycle",
+ "car",
+ "motorcycle",
+ "airplane",
+ "bus",
+ "train",
+ "truck",
+ "boat",
+ "traffic light",
+ "fire hydrant",
+ "N/A",
+ "stop sign",
+ "parking meter",
+ "bench",
+ "bird",
+ "cat",
+ "dog",
+ "horse",
+ "sheep",
+ "cow",
+ "elephant",
+ "bear",
+ "zebra",
+ "giraffe",
+ "N/A",
+ "backpack",
+ "umbrella",
+ "N/A",
+ "N/A",
+ "handbag",
+ "tie",
+ "suitcase",
+ "frisbee",
+ "skis",
+ "snowboard",
+ "sports ball",
+ "kite",
+ "baseball bat",
+ "baseball glove",
+ "skateboard",
+ "surfboard",
+ "tennis racket",
+ "bottle",
+ "N/A",
+ "wine glass",
+ "cup",
+ "fork",
+ "knife",
+ "spoon",
+ "bowl",
+ "banana",
+ "apple",
+ "sandwich",
+ "orange",
+ "broccoli",
+ "carrot",
+ "hot dog",
+ "pizza",
+ "donut",
+ "cake",
+ "chair",
+ "couch",
+ "potted plant",
+ "bed",
+ "N/A",
+ "dining table",
+ "N/A",
+ "N/A",
+ "toilet",
+ "N/A",
+ "tv",
+ "laptop",
+ "mouse",
+ "remote",
+ "keyboard",
+ "cell phone",
+ "microwave",
+ "oven",
+ "toaster",
+ "sink",
+ "refrigerator",
+ "N/A",
+ "book",
+ "clock",
+ "vase",
+ "scissors",
+ "teddy bear",
+ "hair drier",
+ "toothbrush",
+ ]
+
+        # Initialize the object detector
+ self.object_detector = DetrLearner(device=device)
+ self.object_detector.download(path=".", verbose=True)
+
+ self.get_logger().info("Object Detection 2D DETR node initialized.")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding="bgr8")
+
+        # Run object detection
+ boxes = self.object_detector.infer(image)
+
+ # Annotate image and publish results:
+ if self.detection_publisher is not None:
+ ros_detection = self.bridge.to_ros_bounding_box_list(boxes)
+ self.detection_publisher.publish(ros_detection)
+            # We can get the data back using self.bridge.from_ros_bounding_box_list(ros_detection)
+ # e.g., opendr_detection = self.bridge.from_ros_bounding_box_list(ros_detection)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = np.float32(image.opencv())
+ image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
+ message = self.bridge.to_ros_image(Image(image), encoding="bgr8")
+ self.image_publisher.publish(message)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_detr_node = ObjectDetectionDetrNode(
+ device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ )
+
+ rclpy.spin(object_detection_detr_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_detr_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_gem_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_gem_node.py
new file mode 100644
index 0000000000..9f0b3b9760
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_gem_node.py
@@ -0,0 +1,282 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import argparse
+import cv2
+import message_filters
+import numpy as np
+import rclpy
+import torch
+from rclpy.node import Node
+from opendr_bridge import ROS2Bridge
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import GemLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionGemNode(Node):
+ def __init__(
+ self,
+ input_rgb_image_topic="/camera/color/image_raw",
+ input_infra_image_topic="/camera/infra/image_raw",
+ output_rgb_image_topic="/opendr/rgb_image_objects_annotated",
+ output_infra_image_topic="/opendr/infra_image_objects_annotated",
+ detections_topic="/opendr/objects",
+ device="cuda",
+ pts_rgb=None,
+ pts_infra=None,
+ ):
+ """
+ Creates a ROS2 Node for object detection with GEM
+ :param input_rgb_image_topic: Topic from which we are reading the input rgb image
+ :type input_rgb_image_topic: str
+ :param input_infra_image_topic: Topic from which we are reading the input infrared image
+        :type input_infra_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated rgb image (if None, we are not
+ publishing annotated image)
+ :type output_rgb_image_topic: str
+ :param output_infra_image_topic: Topic to which we are publishing the annotated infrared image (if None, we are not
+ publishing annotated image)
+ :type output_infra_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, we are
+ not publishing annotations)
+ :type detections_topic: str
+ :param device: Device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param pts_rgb: Points on the rgb image that define alignment with the infrared image. These are camera
+            specific and can be obtained using get_color_infra_alignment.py which is located in the
+            opendr/perception/object_detection_2d/utils module.
+        :type pts_rgb: {list, numpy.ndarray}
+        :param pts_infra: Points on the infrared image that define alignment with the rgb image. These are camera
+            specific and can be obtained using get_color_infra_alignment.py which is located in the
+            opendr/perception/object_detection_2d/utils module.
+        :type pts_infra: {list, numpy.ndarray}
+ """
+ super().__init__("opendr_object_detection_2d_gem_node")
+
+ if output_rgb_image_topic is not None:
+ self.rgb_publisher = self.create_publisher(msg_type=ROS_Image, topic=output_rgb_image_topic, qos_profile=10)
+ else:
+ self.rgb_publisher = None
+ if output_infra_image_topic is not None:
+ self.ir_publisher = self.create_publisher(msg_type=ROS_Image, topic=output_infra_image_topic, qos_profile=10)
+ else:
+ self.ir_publisher = None
+
+ if detections_topic is not None:
+ self.detection_publisher = self.create_publisher(msg_type=Detection2DArray, topic=detections_topic, qos_profile=10)
+ else:
+ self.detection_publisher = None
+ if pts_infra is None:
+ pts_infra = np.array(
+ [
+ [478, 248],
+ [465, 338],
+ [458, 325],
+ [468, 256],
+ [341, 240],
+ [335, 310],
+ [324, 321],
+ [311, 383],
+ [434, 365],
+ [135, 384],
+ [67, 257],
+ [167, 206],
+ [124, 131],
+ [364, 276],
+ [424, 269],
+ [277, 131],
+ [41, 310],
+ [202, 320],
+ [188, 318],
+ [188, 308],
+ [196, 241],
+ [499, 317],
+ [311, 164],
+ [220, 216],
+ [435, 352],
+ [213, 363],
+ [390, 364],
+ [212, 368],
+ [390, 370],
+ [467, 324],
+ [415, 364],
+ ]
+ )
+ self.get_logger().warn(
+ "\nUsing default calibration values for pts_infra!" +
+ "\nThese are probably incorrect." +
+ "\nThe correct values for pts_infra can be found by running get_rgb_infra_alignment.py." +
+ "\nThis file is located in the opendr/perception/object_detection2d/utils module."
+ )
+ if pts_rgb is None:
+ pts_rgb = np.array(
+ [
+ [910, 397],
+ [889, 572],
+ [874, 552],
+ [891, 411],
+ [635, 385],
+ [619, 525],
+ [603, 544],
+ [576, 682],
+ [810, 619],
+ [216, 688],
+ [90, 423],
+ [281, 310],
+ [193, 163],
+ [684, 449],
+ [806, 431],
+ [504, 170],
+ [24, 538],
+ [353, 552],
+ [323, 550],
+ [323, 529],
+ [344, 387],
+ [961, 533],
+ [570, 233],
+ [392, 336],
+ [831, 610],
+ [378, 638],
+ [742, 630],
+ [378, 648],
+ [742, 640],
+ [895, 550],
+ [787, 630],
+ ]
+ )
+ self.get_logger().warn(
+ "\nUsing default calibration values for pts_rgb!" +
+ "\nThese are probably incorrect." +
+ "\nThe correct values for pts_rgb can be found by running get_color_infra_alignment.py." +
+ "\nThis file is located in the opendr/perception/object_detection2d/utils module."
+ )
+ # Object classes
+ self.classes = ["N/A", "chair", "cycle", "bin", "laptop", "drill", "rocker"]
+
+ # Estimating Homography matrix for aligning infra with rgb
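+        # (cv2.findHomography returns the 3x3 projective transform that maps the selected
+        # infrared points onto the corresponding rgb points; self.h is reused in the callback
+        # to warp every incoming infrared frame into the rgb camera frame.)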
+ self.h, status = cv2.findHomography(pts_infra, pts_rgb)
+
+ self.bridge = ROS2Bridge()
+
+        # Initialize the object detection learner
+ model_backbone = "resnet50"
+
+ self.gem_learner = GemLearner(
+ backbone=model_backbone,
+ num_classes=7,
+ device=device,
+ )
+ self.gem_learner.fusion_method = "sc_avg"
+ self.gem_learner.download(path=".", verbose=True)
+
+ # Subscribers
+ msg_rgb = message_filters.Subscriber(self, ROS_Image, input_rgb_image_topic, 1)
+ msg_ir = message_filters.Subscriber(self, ROS_Image, input_infra_image_topic, 1)
+
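+        # TimeSynchronizer only triggers the callback when an rgb and an infrared message
+        # carry matching timestamps; a queue size of 1 keeps just the latest pair.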
+ sync = message_filters.TimeSynchronizer([msg_rgb, msg_ir], 1)
+ sync.registerCallback(self.callback)
+
+ def callback(self, msg_rgb, msg_ir):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param msg_rgb: input rgb image message
+ :type msg_rgb: sensor_msgs.msg.Image
+ :param msg_ir: input infrared image message
+ :type msg_ir: sensor_msgs.msg.Image
+ """
+ # Convert images to OpenDR standard
+ image_rgb = self.bridge.from_ros_image(msg_rgb).opencv()
+ image_ir_raw = self.bridge.from_ros_image(msg_ir, "bgr8").opencv()
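+        # Warp the infrared frame into the rgb camera frame using the homography estimated
+        # in __init__, so both modalities are pixel-aligned before inference.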
+ image_ir = cv2.warpPerspective(image_ir_raw, self.h, (image_rgb.shape[1], image_rgb.shape[0]))
+
+ # Perform inference on images
+ boxes, w_sensor1, _ = self.gem_learner.infer(image_rgb, image_ir)
+
+ # Annotate image and publish results:
+ if self.detection_publisher is not None:
+ ros_detection = self.bridge.to_ros_bounding_box_list(boxes)
+ self.detection_publisher.publish(ros_detection)
+ # We can get the data back using self.bridge.from_ros_bounding_box_list(ros_detection)
+ # e.g., opendr_detection = self.bridge.from_ros_bounding_box_list(ros_detection)
+
+ if self.rgb_publisher is not None:
+ plot_rgb = draw_bounding_boxes(image_rgb, boxes, class_names=self.classes)
+ message = self.bridge.to_ros_image(Image(np.uint8(plot_rgb)))
+ self.rgb_publisher.publish(message)
+ if self.ir_publisher is not None:
+ plot_ir = draw_bounding_boxes(image_ir, boxes, class_names=self.classes)
+ message = self.bridge.to_ros_image(Image(np.uint8(plot_ir)))
+ self.ir_publisher.publish(message)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-ic", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/camera/color/image_raw")
+ parser.add_argument("-ii", "--input_infra_image_topic", help="Topic name for input infrared image",
+ type=str, default="/camera/infra/image_raw")
+ parser.add_argument("-oc", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/rgb_image_objects_annotated")
+ parser.add_argument("-oi", "--output_infra_image_topic", help="Topic name for output annotated infrared image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/infra_image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help='Device to use, either "cpu" or "cuda", defaults to "cuda"',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ gem_node = ObjectDetectionGemNode(
+ device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ input_infra_image_topic=args.input_infra_image_topic,
+ output_infra_image_topic=args.output_infra_image_topic,
+ detections_topic=args.detections_topic,
+ )
+
+ rclpy.spin(gem_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ gem_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_nanodet_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_nanodet_node.py
new file mode 100755
index 0000000000..31902c032e
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_nanodet_node.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import NanodetLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionNanodetNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects", device="cuda", model="plus_m_1.5x_416"):
+ """
+ Creates a ROS2 Node for object detection with Nanodet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param model: the name of the model whose config file will be loaded
+ :type model: str
+ """
+ super().__init__('object_detection_2d_nanodet_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the object detector
+ self.object_detector = NanodetLearner(model_to_use=model, device=device)
+ self.object_detector.download(path=".", mode="pretrained", verbose=True)
+ self.object_detector.load("./nanodet_{}".format(model))
+
+ self.get_logger().info("Object Detection 2D Nanodet node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.35)
+
+        if self.object_publisher is not None:
+            # Publish detections in ROS message
+            ros_boxes = self.bridge.to_ros_boxes(boxes)  # Convert to ROS boxes
+            self.object_publisher.publish(ros_boxes)
+
+        if self.image_publisher is not None:
+            # Get an OpenCV image back
+            image = image.opencv()
+            # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model that config file will be used", type=str, default="plus_m_1.5x_416")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_nanodet_node = ObjectDetectionNanodetNode(device=device, model=args.model,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(object_detection_nanodet_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_nanodet_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_ssd_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_ssd_node.py
new file mode 100644
index 0000000000..acf4bce4a4
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_ssd_node.py
@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import SingleShotDetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+from opendr.perception.object_detection_2d import Seq2SeqNMSLearner, SoftNMS, FastNMS, ClusterNMS
+
+
+class ObjectDetectionSSDNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects", device="cuda", backbone="vgg16_atrous", nms_type='default'):
+ """
+ Creates a ROS2 Node for object detection with SSD.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ annotated image)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ :param nms_type: type of NMS method, can be one
+ of 'default', 'seq2seq-nms', 'soft-nms', 'fast-nms', 'cluster-nms'
+ :type nms_type: str
+ """
+ super().__init__('opendr_object_detection_2d_ssd_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.object_detector = SingleShotDetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("ssd_default_person")
+ self.custom_nms = None
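+        # When custom_nms stays None, infer() below falls back to the learner's built-in
+        # non-maximum suppression (the "default" case).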
+
+ # Initialize NMS if selected
+ if nms_type == 'seq2seq-nms':
+ self.custom_nms = Seq2SeqNMSLearner(fmod_map_type='EDGEMAP', iou_filtering=0.8,
+ app_feats='fmod', device=device)
+ self.custom_nms.download(model_name='seq2seq_pets_jpd_fmod', path='.')
+ self.custom_nms.load('./seq2seq_pets_jpd_fmod/', verbose=True)
+ self.get_logger().info("Object Detection 2D SSD node seq2seq-nms initialized.")
+ elif nms_type == 'soft-nms':
+ self.custom_nms = SoftNMS(nms_thres=0.45, device=device)
+ self.get_logger().info("Object Detection 2D SSD node soft-nms initialized.")
+ elif nms_type == 'fast-nms':
+ self.custom_nms = FastNMS(device=device)
+ self.get_logger().info("Object Detection 2D SSD node fast-nms initialized.")
+ elif nms_type == 'cluster-nms':
+ self.custom_nms = ClusterNMS(device=device)
+ self.get_logger().info("Object Detection 2D SSD node cluster-nms initialized.")
+ else:
+ self.get_logger().info("Object Detection 2D SSD node using default NMS.")
+
+ self.get_logger().info("Object Detection 2D SSD node initialized.")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.45, keep_size=False, custom_nms=self.custom_nms)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to vgg16_atrous",
+ type=str, default="vgg16_atrous", choices=["vgg16_atrous"])
+ parser.add_argument("--nms_type", help="Non-Maximum Suppression type, defaults to \"default\", options are "
+ "\"seq2seq-nms\", \"soft-nms\", \"fast-nms\", \"cluster-nms\"",
+ type=str, default="default",
+ choices=["default", "seq2seq-nms", "soft-nms", "fast-nms", "cluster-nms"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_ssd_node = ObjectDetectionSSDNode(device=device, backbone=args.backbone, nms_type=args.nms_type,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(object_detection_ssd_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_ssd_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov3_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov3_node.py
new file mode 100644
index 0000000000..43bd7aab03
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov3_node.py
@@ -0,0 +1,143 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import YOLOv3DetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionYOLOV3Node(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects", device="cuda", backbone="darknet53"):
+ """
+ Creates a ROS2 Node for object detection with YOLOV3
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param backbone: backbone network
+ :type backbone: str
+ """
+ super().__init__('object_detection_2d_yolov3_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.object_detector = YOLOv3DetectorLearner(backbone=backbone, device=device)
+ self.object_detector.download(path=".", verbose=True)
+ self.object_detector.load("yolo_default")
+
+ self.get_logger().info("Object Detection 2D YOLOV3 node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image, threshold=0.1, keep_size=False)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_bounding_box_list(boxes) # Convert to ROS bounding_box_list
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--backbone", help="Backbone network, defaults to \"darknet53\"",
+ type=str, default="darknet53", choices=["darknet53"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_yolov3_node = ObjectDetectionYOLOV3Node(device=device, backbone=args.backbone,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(object_detection_yolov3_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_yolov3_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov5_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov5_node.py
new file mode 100644
index 0000000000..e80d0e34a4
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_yolov5_node.py
@@ -0,0 +1,142 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2DArray
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import YOLOv5DetectorLearner
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+class ObjectDetectionYOLOV5Node(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_objects_annotated",
+ detections_topic="/opendr/objects", device="cuda", model="yolov5s"):
+ """
+ Creates a ROS2 Node for object detection with YOLOV5.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no object detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model: model to use
+ :type model: str
+ """
+ super().__init__('object_detection_2d_yolov5_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2DArray, detections_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.object_detector = YOLOv5DetectorLearner(model_name=model, device=device)
+
+ self.get_logger().info("Object Detection 2D YOLOV5 node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run object detection
+ boxes = self.object_detector.infer(image)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_bounding_box_list(boxes) # Convert to ROS bounding_box_list
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with object detection boxes
+ image = draw_bounding_boxes(image, boxes, class_names=self.object_detector.classes, line_thickness=3)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model to use, defaults to \"yolov5s\"", type=str, default="yolov5s",
+ choices=['yolov5s', 'yolov5n', 'yolov5m', 'yolov5l', 'yolov5x',
+ 'yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'custom'])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detection_yolov5_node = ObjectDetectionYOLOV5Node(device=device, model=args.model,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic)
+
+ rclpy.spin(object_detection_yolov5_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_detection_yolov5_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_3d_voxel_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_3d_voxel_node.py
new file mode 100644
index 0000000000..4c3b883905
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_3d_voxel_node.py
@@ -0,0 +1,151 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import torch
+import argparse
+import os
+import rclpy
+from rclpy.node import Node
+from vision_msgs.msg import Detection3DArray
+from sensor_msgs.msg import PointCloud as ROS_PointCloud
+from opendr_bridge import ROS2Bridge
+from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
+
+
+class ObjectDetection3DVoxelNode(Node):
+ def __init__(
+ self,
+ input_point_cloud_topic="/opendr/dataset_point_cloud",
+ detections_topic="/opendr/objects3d",
+ device="cuda:0",
+ model_name="tanet_car_xyres_16",
+ model_config_path=os.path.join(
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
+ "voxel_object_detection_3d", "second_detector", "configs", "tanet",
+ "ped_cycle", "test_short.proto"
+ ),
+ temp_dir="temp",
+ ):
+ """
+ Creates a ROS2 Node for 3D object detection
+ :param input_point_cloud_topic: Topic from which we are reading the input point cloud
+ :type input_point_cloud_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model_name: the pretrained model to download or a trained model in temp_dir
+ :type model_name: str
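+        :param model_config_path: path to the model .proto configuration file used by the learner
+        :type model_config_path: str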
+ :param temp_dir: where to store models
+ :type temp_dir: str
+ """
+
+ super().__init__('opendr_object_detection_3d_voxel_node')
+
+ self.get_logger().info("Using model_name: {}".format(model_name))
+
+ self.learner = VoxelObjectDetection3DLearner(
+ device=device, temp_path=temp_dir, model_config_path=model_config_path
+ )
+ if not os.path.exists(os.path.join(temp_dir, model_name)):
+ VoxelObjectDetection3DLearner.download(model_name, temp_dir)
+
+ self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
+
+ # Initialize OpenDR ROSBridge object
+ self.bridge = ROS2Bridge()
+
+ self.detection_publisher = self.create_publisher(
+ Detection3DArray, detections_topic, 1
+ )
+
+ self.create_subscription(ROS_PointCloud, input_point_cloud_topic, self.callback, 1)
+
+ self.get_logger().info("Object Detection 3D Voxel Node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+        :type data: sensor_msgs.msg.PointCloud
+ """
+
+        # Convert sensor_msgs.msg.PointCloud into an OpenDR PointCloud
+ point_cloud = self.bridge.from_ros_point_cloud(data)
+ detection_boxes = self.learner.infer(point_cloud)
+
+ # Convert detected boxes to ROS type and publish
+ ros_boxes = self.bridge.to_ros_boxes_3d(detection_boxes)
+ self.detection_publisher.publish(ros_boxes)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_point_cloud_topic",
+ help="Point Cloud topic provided by either a point_cloud_dataset_node or any other 3D Point Cloud Node",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=str, default="/opendr/objects3d")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="tanet_car_xyres_16")
+ parser.add_argument(
+ "-c", "--model_config_path", help="Path to a model .proto config",
+ type=str, default=os.path.join(
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
+ "voxel_object_detection_3d", "second_detector", "configs", "tanet",
+ "car", "xyres_16.proto"
+ )
+ )
+ parser.add_argument("-t", "--temp_dir", help="Path to a temp dir with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ voxel_node = ObjectDetection3DVoxelNode(
+ device=device,
+ model_name=args.model_name,
+ model_config_path=args.model_config_path,
+ input_point_cloud_topic=args.input_point_cloud_topic,
+ temp_dir=args.temp_dir,
+ detections_topic=args.detections_topic,
+ )
+
+ rclpy.spin(voxel_node)
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ voxel_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_deep_sort_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_deep_sort_node.py
new file mode 100644
index 0000000000..30b83a8b75
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_deep_sort_node.py
@@ -0,0 +1,247 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import torch
+import argparse
+import cv2
+import os
+from opendr.engine.target import TrackingAnnotationList
+import rclpy
+from rclpy.node import Node
+from vision_msgs.msg import Detection2DArray
+from std_msgs.msg import Int32MultiArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr.perception.object_tracking_2d import (
+ ObjectTracking2DDeepSortLearner,
+ ObjectTracking2DFairMotLearner
+)
+from opendr.engine.data import Image, ImageWithDetections
+
+
+class ObjectTracking2DDeepSortNode(Node):
+ def __init__(
+ self,
+ detector=None,
+ input_rgb_image_topic="image_raw",
+ output_detection_topic="/opendr/objects",
+ output_tracking_id_topic="/opendr/objects_tracking_id",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
+ device="cuda:0",
+ model_name="deep_sort",
+ temp_dir="temp",
+ ):
+ """
+ Creates a ROS2 Node for 2D object tracking
+ :param detector: Learner to generate object detections
+ :type detector: Learner
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ annotated image)
+ :type output_rgb_image_topic: str
+ :param output_detection_topic: Topic to which we are publishing the detections
+ :type output_detection_topic: str
+ :param output_tracking_id_topic: Topic to which we are publishing the tracking ids
+ :type output_tracking_id_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model_name: the pretrained model to download or a saved model in temp_dir folder to use
+ :type model_name: str
+ :param temp_dir: the folder to download models
+ :type temp_dir: str
+ """
+
+ super().__init__('opendr_object_tracking_2d_deep_sort_node')
+
+ self.get_logger().info("Using model_name: {}".format(model_name))
+
+ self.detector = detector
+ self.learner = ObjectTracking2DDeepSortLearner(
+ device=device, temp_path=temp_dir,
+ )
+ if not os.path.exists(os.path.join(temp_dir, model_name)):
+ ObjectTracking2DDeepSortLearner.download(model_name, temp_dir)
+
+ self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
+
+ self.bridge = ROS2Bridge()
+
+        if output_tracking_id_topic is not None:
+            self.tracking_id_publisher = self.create_publisher(
+                Int32MultiArray, output_tracking_id_topic, 1
+            )
+        else:
+            self.tracking_id_publisher = None
+
+        if output_rgb_image_topic is not None:
+            self.output_image_publisher = self.create_publisher(
+                ROS_Image, output_rgb_image_topic, 1
+            )
+        else:
+            self.output_image_publisher = None
+
+        if output_detection_topic is not None:
+            self.detection_publisher = self.create_publisher(
+                Detection2DArray, output_detection_topic, 1
+            )
+        else:
+            self.detection_publisher = None
+
+ self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding="bgr8")
+ detection_boxes = self.detector.infer(image)
+ image_with_detections = ImageWithDetections(image.numpy(), detection_boxes)
+ tracking_boxes = self.learner.infer(image_with_detections, swap_left_top=False)
+
+ if self.output_image_publisher is not None:
+ frame = image.opencv()
+ draw_predictions(frame, tracking_boxes)
+ message = self.bridge.to_ros_image(
+ Image(frame), encoding="bgr8"
+ )
+ self.output_image_publisher.publish(message)
+ self.get_logger().info("Published annotated image")
+
+ if self.detection_publisher is not None:
+ detection_boxes = tracking_boxes.bounding_box_list()
+ ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
+ self.detection_publisher.publish(ros_boxes)
+ self.get_logger().info("Published " + str(len(detection_boxes)) + " detection boxes")
+
+ if self.tracking_id_publisher is not None:
+ ids = [int(tracking_box.id) for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
+ self.tracking_id_publisher.publish(ros_ids)
+ self.get_logger().info("Published " + str(len(ids)) + " tracking ids")
+
+
+colors = [
+ (255, 0, 255),
+ (0, 0, 255),
+ (0, 255, 0),
+ (255, 0, 0),
+ (35, 69, 55),
+ (43, 63, 54),
+]
+
+
+def draw_predictions(frame, predictions: TrackingAnnotationList, is_centered=False, is_flipped_xy=True):
+ global colors
+ w, h, _ = frame.shape
+
+ for prediction in predictions.boxes:
+
+ if not hasattr(prediction, "id"):
+ prediction.id = 0
+
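+        # Derive a stable color from the track id so the same object keeps its color across frames.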
+ color = colors[int(prediction.id) * 7 % len(colors)]
+
+ x = prediction.left
+ y = prediction.top
+
+ if is_flipped_xy:
+ x = prediction.top
+ y = prediction.left
+
+ if is_centered:
+ x -= prediction.width
+ y -= prediction.height
+
+ cv2.rectangle(
+ frame,
+ (int(x), int(y)),
+ (
+ int(x + prediction.width),
+ int(y + prediction.height),
+ ),
+ color,
+ 2,
+ )
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic",
+ help="Input Image topic provided by either an image_dataset_node, webcam or any other image node",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic",
+ help="Output annotated image topic with a visualization of detections and their ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("-t", "--tracking_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="deep_sort", choices=["deep_sort"])
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ detection_learner = ObjectTracking2DFairMotLearner(
+ device=device, temp_path=args.temp_dir,
+ )
+ if not os.path.exists(os.path.join(args.temp_dir, "fairmot_dla34")):
+ ObjectTracking2DFairMotLearner.download("fairmot_dla34", args.temp_dir)
+
+ detection_learner.load(os.path.join(args.temp_dir, "fairmot_dla34"), verbose=True)
+
+ deep_sort_node = ObjectTracking2DDeepSortNode(
+ detector=detection_learner,
+ device=device,
+ model_name=args.model_name,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ temp_dir=args.temp_dir,
+ output_detection_topic=args.detections_topic,
+ output_tracking_id_topic=args.tracking_id_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ )
+ rclpy.spin(deep_sort_node)
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ deep_sort_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_fair_mot_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_fair_mot_node.py
new file mode 100755
index 0000000000..bcd30f68ac
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_fair_mot_node.py
@@ -0,0 +1,231 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import torch
+import argparse
+import cv2
+import os
+from opendr.engine.target import TrackingAnnotationList
+import rclpy
+from rclpy.node import Node
+from vision_msgs.msg import Detection2DArray
+from std_msgs.msg import Int32MultiArray
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr.perception.object_tracking_2d import (
+ ObjectTracking2DFairMotLearner,
+)
+from opendr.engine.data import Image
+
+
+class ObjectTracking2DFairMotNode(Node):
+ def __init__(
+ self,
+ input_rgb_image_topic="image_raw",
+ output_rgb_image_topic="/opendr/image_objects_annotated",
+ output_detection_topic="/opendr/objects",
+ output_tracking_id_topic="/opendr/objects_tracking_id",
+ device="cuda:0",
+ model_name="fairmot_dla34",
+ temp_dir="temp",
+ ):
+ """
+ Creates a ROS2 Node for 2D object tracking
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+        :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+        image is published)
+ :type output_rgb_image_topic: str
+ :param output_detection_topic: Topic to which we are publishing the detections
+ :type output_detection_topic: str
+ :param output_tracking_id_topic: Topic to which we are publishing the tracking ids
+ :type output_tracking_id_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model_name: the pretrained model to download or a saved model in temp_dir folder to use
+ :type model_name: str
+ :param temp_dir: the folder to download models
+ :type temp_dir: str
+ """
+
+ super().__init__('opendr_object_tracking_2d_fair_mot_node')
+
+ self.learner = ObjectTracking2DFairMotLearner(
+ device=device, temp_path=temp_dir,
+ )
+ if not os.path.exists(os.path.join(temp_dir, model_name)):
+ ObjectTracking2DFairMotLearner.download(model_name, temp_dir)
+
+ self.learner.load(os.path.join(temp_dir, model_name), verbose=True)
+
+ # Initialize OpenDR ROSBridge object
+ self.bridge = ROS2Bridge()
+
+        if output_detection_topic is not None:
+            self.detection_publisher = self.create_publisher(
+                Detection2DArray, output_detection_topic, 1
+            )
+        else:
+            self.detection_publisher = None
+
+        if output_tracking_id_topic is not None:
+            self.tracking_id_publisher = self.create_publisher(
+                Int32MultiArray, output_tracking_id_topic, 1
+            )
+        else:
+            self.tracking_id_publisher = None
+
+        if output_rgb_image_topic is not None:
+            self.output_image_publisher = self.create_publisher(
+                ROS_Image, output_rgb_image_topic, 1
+            )
+        else:
+            self.output_image_publisher = None
+
+ self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding="bgr8")
+ tracking_boxes = self.learner.infer(image)
+
+ if self.output_image_publisher is not None:
+ frame = image.opencv()
+ draw_predictions(frame, tracking_boxes)
+ message = self.bridge.to_ros_image(
+ Image(frame), encoding="bgr8"
+ )
+ self.output_image_publisher.publish(message)
+ self.get_logger().info("Published annotated image")
+
+ if self.detection_publisher is not None:
+ detection_boxes = tracking_boxes.bounding_box_list()
+ ros_boxes = self.bridge.to_ros_boxes(detection_boxes)
+ self.detection_publisher.publish(ros_boxes)
+ self.get_logger().info("Published " + str(len(detection_boxes)) + " detection boxes")
+
+ if self.tracking_id_publisher is not None:
+            ids = [int(tracking_box.id) for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
+ self.tracking_id_publisher.publish(ros_ids)
+ self.get_logger().info("Published " + str(len(ids)) + " tracking ids")
+
+
+colors = [
+ (255, 0, 255),
+ (0, 0, 255),
+ (0, 255, 0),
+ (255, 0, 0),
+ (35, 69, 55),
+ (43, 63, 54),
+]
+
+
+def draw_predictions(frame, predictions: TrackingAnnotationList, is_centered=False, is_flipped_xy=True):
+ global colors
+    h, w, _ = frame.shape  # OpenCV frames are (height, width, channels)
+
+    for prediction in predictions.boxes:
+
+ if not hasattr(prediction, "id"):
+ prediction.id = 0
+
+ color = colors[int(prediction.id) * 7 % len(colors)]
+
+ x = prediction.left
+ y = prediction.top
+
+ if is_flipped_xy:
+ x = prediction.top
+ y = prediction.left
+
+ if is_centered:
+ x -= prediction.width
+ y -= prediction.height
+
+ cv2.rectangle(
+ frame,
+ (int(x), int(y)),
+ (
+ int(x + prediction.width),
+ int(y + prediction.height),
+ ),
+ color,
+ 2,
+ )
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic",
+ help="Input Image topic provided by either an image_dataset_node, webcam or any other image node",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic",
+ help="Output annotated image topic with a visualization of detections and their ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_objects_annotated")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects")
+ parser.add_argument("-t", "--tracking_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-n", "--model_name", help="Name of the trained model",
+ type=str, default="fairmot_dla34", choices=["fairmot_dla34"])
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ fair_mot_node = ObjectTracking2DFairMotNode(
+ device=device,
+ model_name=args.model_name,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ temp_dir=args.temp_dir,
+ output_detection_topic=args.detections_topic,
+ output_tracking_id_topic=args.tracking_id_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ )
+
+ rclpy.spin(fair_mot_node)
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ fair_mot_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_siamrpn_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_siamrpn_node.py
new file mode 100644
index 0000000000..f2d49919ad
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_siamrpn_node.py
@@ -0,0 +1,179 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import mxnet as mx
+
+import cv2
+from math import dist
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Detection2D
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import TrackingAnnotation, BoundingBox
+from opendr.perception.object_tracking_2d import SiamRPNLearner
+from opendr.perception.object_detection_2d import YOLOv3DetectorLearner
+
+
+class ObjectTrackingSiamRPNNode(Node):
+
+ def __init__(self, object_detector, input_rgb_image_topic="/image_raw",
+ output_rgb_image_topic="/opendr/image_tracking_annotated",
+ tracker_topic="/opendr/tracked_object",
+ device="cuda"):
+ """
+ Creates a ROS2 Node for object tracking with SiamRPN.
+ :param object_detector: An object detector learner to use for initialization
+ :type object_detector: opendr.engine.learners.Learner
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param tracker_topic: Topic to which we are publishing the annotation
+ :type tracker_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ super().__init__('opendr_object_tracking_2d_siamrpn_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if tracker_topic is not None:
+ self.object_publisher = self.create_publisher(Detection2D, tracker_topic, 1)
+ else:
+ self.object_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.object_detector = object_detector
+ # Initialize object tracker
+ self.tracker = SiamRPNLearner(device=device)
+ self.image = None
+ self.initialized = False
+
+ self.get_logger().info("Object Tracking 2D SiamRPN node started.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ self.image = image
+
+ if not self.initialized:
+ # Run object detector to initialize the tracker
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+ boxes = self.object_detector.infer(image)
+
+ img_center = [int(image.data.shape[2] // 2), int(image.data.shape[1] // 2)] # width, height
+ # Find the box that is closest to the center of the image
+ center_box = BoundingBox("", left=0, top=0, width=0, height=0)
+ min_distance = dist([center_box.left, center_box.top], img_center)
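+            # min_distance starts as the distance from the image's top-left corner to its center,
+            # an upper bound so that only detections reasonably close to the center are considered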
+ for box in boxes:
+ new_distance = dist([int(box.left + box.width // 2), int(box.top + box.height // 2)], img_center)
+ if new_distance < min_distance and box.width > 32 and box.height > 32: # Ignore very small boxes
+ center_box = box
+                    min_distance = new_distance
+
+ # Initialize tracker with the most central box found
+ init_box = TrackingAnnotation(center_box.name,
+ center_box.left, center_box.top, center_box.width, center_box.height,
+ id=0, score=center_box.confidence)
+
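+            # The first infer() call receives the initial box to start tracking the selected target;
+            # later calls (in the branch below) only pass the frame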
+ self.tracker.infer(self.image, init_box)
+ self.initialized = True
+ self.get_logger().info("Object Tracking 2D SiamRPN node initialized with the most central bounding box.")
+
+ if self.initialized:
+ # Run object tracking
+ box = self.tracker.infer(image)
+
+ if self.object_publisher is not None:
+ # Publish detections in ROS message
+ ros_boxes = self.bridge.to_ros_single_tracking_annotation(box)
+ self.object_publisher.publish(ros_boxes)
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ cv2.rectangle(image, (box.left, box.top),
+ (box.left + box.width, box.top + box.height),
+ (0, 255, 255), 3)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_tracking_annotated")
+ parser.add_argument("-t", "--tracker_topic", help="Topic name for tracker messages",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/tracked_object")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and mx.context.num_gpus() > 0:
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ object_detector = YOLOv3DetectorLearner(backbone="darknet53", device=device)
+ object_detector.download(path=".", verbose=True)
+ object_detector.load("yolo_default")
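+    # The YOLOv3 detector is only used once, to pick the initial target box that seeds the SiamRPN tracker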
+
+ object_tracker_2d_siamrpn_node = ObjectTrackingSiamRPNNode(object_detector=object_detector, device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ tracker_topic=args.tracker_topic)
+
+ rclpy.spin(object_tracker_2d_siamrpn_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ object_tracker_2d_siamrpn_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_3d_ab3dmot_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_3d_ab3dmot_node.py
new file mode 100644
index 0000000000..c0cfb95124
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_3d_ab3dmot_node.py
@@ -0,0 +1,177 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import torch
+import argparse
+import os
+import rclpy
+from rclpy.node import Node
+from vision_msgs.msg import Detection3DArray
+from std_msgs.msg import Int32MultiArray
+from sensor_msgs.msg import PointCloud as ROS_PointCloud
+from opendr_bridge import ROS2Bridge
+from opendr.perception.object_tracking_3d import ObjectTracking3DAb3dmotLearner
+from opendr.perception.object_detection_3d import VoxelObjectDetection3DLearner
+
+
+class ObjectTracking3DAb3dmotNode(Node):
+ def __init__(
+ self,
+ detector=None,
+ input_point_cloud_topic="/opendr/dataset_point_cloud",
+ output_detection3d_topic="/opendr/detection3d",
+ output_tracking3d_id_topic="/opendr/tracking3d_id",
+ device="cuda:0",
+ ):
+ """
+ Creates a ROS2 Node for 3D object tracking
+ :param detector: Learner that provides 3D object detections
+ :type detector: Learner
+ :param input_point_cloud_topic: Topic from which we are reading the input point cloud
+ :type input_point_cloud_topic: str
+ :param output_detection3d_topic: Topic to which we are publishing the annotations
+ :type output_detection3d_topic: str
+ :param output_tracking3d_id_topic: Topic to which we are publishing the tracking ids
+ :type output_tracking3d_id_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ super().__init__('opendr_object_tracking_3d_ab3dmot_node')
+
+ self.detector = detector
+ self.learner = ObjectTracking3DAb3dmotLearner(
+ device=device
+ )
+
+ # Initialize OpenDR ROSBridge object
+ self.bridge = ROS2Bridge()
+
+        if output_detection3d_topic is not None:
+            self.detection_publisher = self.create_publisher(
+                Detection3DArray, output_detection3d_topic, 1
+            )
+        else:
+            self.detection_publisher = None
+
+        if output_tracking3d_id_topic is not None:
+            self.tracking_id_publisher = self.create_publisher(
+                Int32MultiArray, output_tracking3d_id_topic, 1
+            )
+        else:
+            self.tracking_id_publisher = None
+
+ self.create_subscription(ROS_PointCloud, input_point_cloud_topic, self.callback, 1)
+
+ self.get_logger().info("Object Tracking 3D Ab3dmot Node initialized.")
+
+ def callback(self, data):
+ """
+ Callback that processes the input data and publishes to the corresponding topics.
+ :param data: input message
+        :type data: sensor_msgs.msg.PointCloud
+ """
+
+        # Convert sensor_msgs.msg.PointCloud into an OpenDR PointCloud
+ point_cloud = self.bridge.from_ros_point_cloud(data)
+ detection_boxes = self.detector.infer(point_cloud)
+
+ # Convert detected boxes to ROS type and publish
+ if self.detection_publisher is not None:
+ ros_boxes = self.bridge.to_ros_boxes_3d(detection_boxes)
+ self.detection_publisher.publish(ros_boxes)
+ self.get_logger().info("Published " + str(len(detection_boxes)) + " detection boxes")
+
+ if self.tracking_id_publisher is not None:
+ tracking_boxes = self.learner.infer(detection_boxes)
+            ids = [int(tracking_box.id) for tracking_box in tracking_boxes]
+ ros_ids = Int32MultiArray()
+ ros_ids.data = ids
+ self.tracking_id_publisher.publish(ros_ids)
+ self.get_logger().info("Published " + str(len(ids)) + " tracking ids")
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_point_cloud_topic",
+ help="Point Cloud topic provided by either a point_cloud_dataset_node or any other 3D Point Cloud Node",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-d", "--detections_topic",
+ help="Output detections topic",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects3d")
+ parser.add_argument("-t", "--tracking3d_id_topic",
+ help="Output tracking ids topic with the same element count as in output_detection_topic",
+ type=lambda value: value if value.lower() != "none" else None, default="/opendr/objects_tracking_id")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("-dn", "--detector_model_name", help="Name of the trained model",
+ type=str, default="tanet_car_xyres_16", choices=["tanet_car_xyres_16"])
+ parser.add_argument(
+ "-dc", "--detector_model_config_path", help="Path to a model .proto config",
+ type=str, default=os.path.join(
+ "$OPENDR_HOME", "src", "opendr", "perception", "object_detection_3d",
+ "voxel_object_detection_3d", "second_detector", "configs", "tanet",
+ "car", "xyres_16.proto"
+ )
+ )
+ parser.add_argument("-td", "--temp_dir", help="Path to a temporary directory with models",
+ type=str, default="temp")
+ args = parser.parse_args()
+
+ input_point_cloud_topic = args.input_point_cloud_topic
+ detector_model_name = args.detector_model_name
+ temp_dir = args.temp_dir
+ detector_model_config_path = args.detector_model_config_path
+ output_detection3d_topic = args.detections_topic
+ output_tracking3d_id_topic = args.tracking3d_id_topic
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ detector = VoxelObjectDetection3DLearner(
+ device=device,
+ temp_path=temp_dir,
+ model_config_path=detector_model_config_path
+ )
+ if not os.path.exists(os.path.join(temp_dir, detector_model_name)):
+ VoxelObjectDetection3DLearner.download(detector_model_name, temp_dir)
+
+ detector.load(os.path.join(temp_dir, detector_model_name), verbose=True)
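+    # AB3DMOT only associates and tracks already-detected 3D boxes, so a pretrained voxel-based
+    # 3D detector (TANet by default) is loaded here to provide the detections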
+
+ ab3dmot_node = ObjectTracking3DAb3dmotNode(
+ detector=detector,
+ device=device,
+ input_point_cloud_topic=input_point_cloud_topic,
+ output_detection3d_topic=output_detection3d_topic,
+ output_tracking3d_id_topic=output_tracking3d_id_topic,
+ )
+
+ rclpy.spin(ab3dmot_node)
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ ab3dmot_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/panoptic_segmentation_efficient_ps_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/panoptic_segmentation_efficient_ps_node.py
new file mode 100644
index 0000000000..e9459f6480
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/panoptic_segmentation_efficient_ps_node.py
@@ -0,0 +1,198 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+from pathlib import Path
+import argparse
+from typing import Optional
+
+import rclpy
+from rclpy.node import Node
+import matplotlib
+from sensor_msgs.msg import Image as ROS_Image
+
+from opendr_bridge import ROS2Bridge
+from opendr.perception.panoptic_segmentation import EfficientPsLearner
+
+# Avoid having a matplotlib GUI in a separate thread in the visualize() function
+matplotlib.use('Agg')
+
+
+class EfficientPsNode(Node):
+ def __init__(self,
+ input_rgb_image_topic: str,
+ checkpoint: str,
+ output_heatmap_topic: Optional[str] = None,
+ output_rgb_visualization_topic: Optional[str] = None,
+ detailed_visualization: bool = False
+ ):
+ """
+ Initialize the EfficientPS ROS2 node and create an instance of the respective learner class.
+ :param checkpoint: This is either a path to a saved model or one of [Cityscapes, KITTI] to download
+ pre-trained model weights.
+ :type checkpoint: str
+ :param input_rgb_image_topic: ROS topic for the input image stream
+ :type input_rgb_image_topic: str
+ :param output_heatmap_topic: ROS topic for the predicted semantic and instance maps
+ :type output_heatmap_topic: str
+ :param output_rgb_visualization_topic: ROS topic for the generated visualization of the panoptic map
+ :type output_rgb_visualization_topic: str
+ :param detailed_visualization: if True, generate a combined overview of the input RGB image and the
+ semantic, instance, and panoptic segmentation maps and publish it on output_rgb_visualization_topic
+ :type detailed_visualization: bool
+ """
+ super().__init__('opendr_efficient_panoptic_segmentation_node')
+
+ self.input_rgb_image_topic = input_rgb_image_topic
+ self.checkpoint = checkpoint
+ self.output_heatmap_topic = output_heatmap_topic
+ self.output_rgb_visualization_topic = output_rgb_visualization_topic
+ self.detailed_visualization = detailed_visualization
+
+ # Initialize all ROS2 related things
+ self._bridge = ROS2Bridge()
+ self._instance_heatmap_publisher = None
+ self._semantic_heatmap_publisher = None
+ self._visualization_publisher = None
+
+ # Initialize the panoptic segmentation network
+ config_file = Path(sys.modules[
+ EfficientPsLearner.__module__].__file__).parent / 'configs' / 'singlegpu_cityscapes.py'
+ self._learner = EfficientPsLearner(str(config_file))
+
+ # Other
+ self._tmp_folder = Path(__file__).parent.parent / 'tmp' / 'efficientps'
+ self._tmp_folder.mkdir(exist_ok=True, parents=True)
+
+ def _init_learner(self) -> bool:
+ """
+ The model can be initialized via
+ 1. downloading pre-trained weights for Cityscapes or KITTI.
+ 2. passing a path to an existing checkpoint file.
+
+ This has not been done in the __init__() function since logging is available only once the node is registered.
+ """
+ if self.checkpoint in ['cityscapes', 'kitti']:
+ file_path = EfficientPsLearner.download(str(self._tmp_folder),
+ trained_on=self.checkpoint)
+ self.checkpoint = file_path
+
+ if self._learner.load(self.checkpoint):
+ self.get_logger().info('Successfully loaded the checkpoint.')
+ return True
+ else:
+ self.get_logger().error('Failed to load the checkpoint.')
+ return False
+
+ def _init_subscriber(self):
+ """
+ Subscribe to all relevant topics.
+ """
+ self.image_subscriber = self.create_subscription(ROS_Image, self.input_rgb_image_topic,
+ self.callback, 1)
+
+ def _init_publisher(self):
+ """
+ Set up the publishers as requested by the user.
+ """
+ if self.output_heatmap_topic is not None:
+ self._instance_heatmap_publisher = self.create_publisher(ROS_Image,
+ f'{self.output_heatmap_topic}/instance',
+ 10)
+ self._semantic_heatmap_publisher = self.create_publisher(ROS_Image,
+ f'{self.output_heatmap_topic}/semantic',
+ 10)
+ if self.output_rgb_visualization_topic is not None:
+ self._visualization_publisher = self.create_publisher(ROS_Image,
+ self.output_rgb_visualization_topic,
+ 10)
+
+ def listen(self):
+ """
+ Start the node and begin processing input data. The order of the function calls ensures that the node does not
+ try to process input images without being in a trained state.
+ """
+ if self._init_learner():
+ self._init_publisher()
+ self._init_subscriber()
+ self.get_logger().info('EfficientPS node started!')
+ rclpy.spin(self)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ self.destroy_node()
+ rclpy.shutdown()
+
+ def callback(self, data: ROS_Image):
+ """
+ Predict the panoptic segmentation map from the input image and publish the results.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image to OpenDR Image
+ image = self._bridge.from_ros_image(data)
+
+ try:
+ # Retrieve a list of two OpenDR heatmaps: [instance map, semantic map]
+ prediction = self._learner.infer(image)
+
+ # The output topics are only published if there is at least one subscriber
+ if self._visualization_publisher is not None and self._visualization_publisher.get_subscription_count() > 0:
+ panoptic_image = EfficientPsLearner.visualize(image, prediction, show_figure=False,
+ detailed=self.detailed_visualization)
+ self._visualization_publisher.publish(self._bridge.to_ros_image(panoptic_image, encoding="rgb8"))
+
+ if self._instance_heatmap_publisher is not None and self._instance_heatmap_publisher.get_subscription_count() > 0:
+ self._instance_heatmap_publisher.publish(self._bridge.to_ros_image(prediction[0]))
+ if self._semantic_heatmap_publisher is not None and self._semantic_heatmap_publisher.get_subscription_count() > 0:
+ self._semantic_heatmap_publisher.publish(self._bridge.to_ros_image(prediction[1]))
+
+ except Exception as e:
+ self.get_logger().error(f'Failed to generate prediction: {e}')
+
+
+def main(args=None):
+ rclpy.init(args=args)
+ parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument('-i', '--input_rgb_image_topic', type=str, default='/image_raw',
+ help='listen to RGB images on this topic')
+ parser.add_argument('-oh', '--output_heatmap_topic',
+ type=lambda value: value if value.lower() != "none" else None,
+ default='/opendr/panoptic',
+ help='publish the semantic and instance maps on this topic as "OUTPUT_HEATMAP_TOPIC/semantic" \
+ and "OUTPUT_HEATMAP_TOPIC/instance"')
+ parser.add_argument('-ov', '--output_rgb_image_topic',
+ type=lambda value: value if value.lower() != "none" else None,
+ default='/opendr/panoptic/rgb_visualization',
+ help='publish the panoptic segmentation map as an RGB image on this topic or a more detailed \
+ overview if using the --detailed_visualization flag')
+ parser.add_argument('--detailed_visualization', action='store_true',
+ help='generate a combined overview of the input RGB image and the semantic, instance, and \
+ panoptic segmentation maps and publish it on OUTPUT_RGB_IMAGE_TOPIC')
+ parser.add_argument('--checkpoint', type=str, default='cityscapes',
+ help='download pretrained models [cityscapes, kitti] or load from the provided path')
+ args = parser.parse_args()
+
+ efficient_ps_node = EfficientPsNode(args.input_rgb_image_topic,
+ args.checkpoint,
+ args.output_heatmap_topic,
+ args.output_rgb_image_topic,
+ args.detailed_visualization)
+ efficient_ps_node.listen()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/point_cloud_dataset_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/point_cloud_dataset_node.py
new file mode 100644
index 0000000000..5ea7f129ff
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/point_cloud_dataset_node.py
@@ -0,0 +1,110 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import rclpy
+from rclpy.node import Node
+from sensor_msgs.msg import PointCloud as ROS_PointCloud
+from opendr_bridge import ROS2Bridge
+from opendr.engine.datasets import DatasetIterator
+from opendr.perception.object_detection_3d import KittiDataset, LabeledPointCloudsDatasetIterator
+
+
+class PointCloudDatasetNode(Node):
+ def __init__(
+ self,
+ dataset: DatasetIterator,
+ output_point_cloud_topic="/opendr/dataset_point_cloud",
+ data_fps=10,
+ ):
+ """
+        Creates a ROS2 Node for publishing dataset point clouds
+ """
+
+ super().__init__('opendr_point_cloud_dataset_node')
+
+ self.dataset = dataset
+ self.bridge = ROS2Bridge()
+ self.timer = self.create_timer(1.0 / data_fps, self.timer_callback)
+ self.sample_index = 0
+
+ self.output_point_cloud_publisher = self.create_publisher(
+ ROS_PointCloud, output_point_cloud_topic, 1
+ )
+ self.get_logger().info("Publishing point_cloud images.")
+
+ def timer_callback(self):
+
+ point_cloud = self.dataset[self.sample_index % len(self.dataset)][0]
+ # Dataset should have a (PointCloud, Target) pair as elements
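+        # The modulo lets the index wrap around, so the dataset is replayed in a loop and the node
+        # keeps publishing point clouds indefinitely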
+
+ message = self.bridge.to_ros_point_cloud(
+ point_cloud, self.get_clock().now().to_msg()
+ )
+ self.output_point_cloud_publisher.publish(message)
+
+ self.sample_index += 1
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-d", "--dataset_path",
+ help="Path to a dataset. If does not exist, nano KITTI dataset will be downloaded there.",
+ type=str, default="KITTI/opendr_nano_kitti")
+ parser.add_argument("-ks", "--kitti_subsets_path",
+ help="Path to kitti subsets. Used only if a KITTI dataset is downloaded",
+ type=str,
+ default="../../src/opendr/perception/object_detection_3d/datasets/nano_kitti_subsets")
+ parser.add_argument("-o", "--output_point_cloud_topic", help="Topic name to publish the data",
+ type=str, default="/opendr/dataset_point_cloud")
+ parser.add_argument("-f", "--fps", help="Data FPS",
+ type=float, default=10)
+ args = parser.parse_args()
+
+ dataset_path = args.dataset_path
+ kitti_subsets_path = args.kitti_subsets_path
+ output_point_cloud_topic = args.output_point_cloud_topic
+ data_fps = args.fps
+
+ if not os.path.exists(dataset_path):
+ dataset_path = KittiDataset.download_nano_kitti(
+ "KITTI", kitti_subsets_path=kitti_subsets_path,
+ create_dir=True,
+ ).path
+
+ dataset = LabeledPointCloudsDatasetIterator(
+ dataset_path + "/training/velodyne_reduced",
+ dataset_path + "/training/label_2",
+ dataset_path + "/training/calib",
+ )
+
+ dataset_node = PointCloudDatasetNode(
+ dataset, output_point_cloud_topic=output_point_cloud_topic, data_fps=data_fps
+ )
+
+ rclpy.spin(dataset_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ dataset_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/pose_estimation_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/pose_estimation_node.py
new file mode 100644
index 0000000000..9193517314
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/pose_estimation_node.py
@@ -0,0 +1,169 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr_interface.msg import OpenDRPose2D
+
+from opendr.engine.data import Image
+from opendr.perception.pose_estimation import draw
+from opendr.perception.pose_estimation import LightweightOpenPoseLearner
+
+
+class PoseEstimationNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw", output_rgb_image_topic="/opendr/image_pose_annotated",
+ detections_topic="/opendr/poses", device="cuda",
+ num_refinement_stages=2, use_stride=False, half_precision=False):
+ """
+ Creates a ROS2 Node for pose estimation with Lightweight OpenPose.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+ image is published)
+ :type output_rgb_image_topic: str
+ :param detections_topic: Topic to which we are publishing the annotations (if None, no pose detection message
+ is published)
+ :type detections_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param num_refinement_stages: Specifies the number of pose estimation refinement stages added to the
+ model's head, including the initial stage. Can be 0, 1 or 2, with more stages meaning slower and more accurate
+ inference
+ :type num_refinement_stages: int
+ :param use_stride: Whether to add a stride value in the model, which reduces accuracy but increases
+ inference speed
+ :type use_stride: bool
+ :param half_precision: Enables inference using half (fp16) precision instead of single (fp32) precision.
+ Valid only for GPU-based inference
+ :type half_precision: bool
+ """
+ super().__init__('opendr_pose_estimation_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if detections_topic is not None:
+ self.pose_publisher = self.create_publisher(OpenDRPose2D, detections_topic, 1)
+ else:
+ self.pose_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=num_refinement_stages,
+ mobilenet_use_stride=use_stride,
+ half_precision=half_precision)
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ self.get_logger().info("Pose estimation node initialized.")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run pose estimation
+ poses = self.pose_estimator.infer(image)
+
+ # Publish detections in ROS message
+ for pose in poses:
+ if self.pose_publisher is not None:
+ # Convert OpenDR pose to ROS2 pose message using bridge and publish it
+ self.pose_publisher.publish(self.bridge.to_ros_pose(pose))
+
+ if self.image_publisher is not None:
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image with poses
+ for pose in poses:
+ draw(image, pose)
+ # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+ self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image, if \"None\" "
+ "no output image is published",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages, if \"None\" "
+ "no detection message is published",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--accelerate", help="Enables acceleration flags (e.g., stride)", default=False,
+ action="store_true")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
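+    # "--accelerate" trades accuracy for speed: it enables the mobilenet stride, skips all refinement stages
+    # and runs inference in half (fp16) precision (the latter is only effective on GPU)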
+ if args.accelerate:
+ stride = True
+ stages = 0
+ half_prec = True
+ else:
+ stride = False
+ stages = 2
+ half_prec = False
+
+ pose_estimator_node = PoseEstimationNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ detections_topic=args.detections_topic,
+ num_refinement_stages=stages, use_stride=stride, half_precision=half_prec)
+
+ rclpy.spin(pose_estimator_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ pose_estimator_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/rgbd_hand_gesture_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/rgbd_hand_gesture_recognition_node.py
new file mode 100755
index 0000000000..8b73944192
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/rgbd_hand_gesture_recognition_node.py
@@ -0,0 +1,166 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import os
+import cv2
+import numpy as np
+import torch
+
+import rclpy
+from rclpy.node import Node
+import message_filters
+from sensor_msgs.msg import Image as ROS_Image
+from vision_msgs.msg import Classification2D
+
+from opendr_bridge import ROS2Bridge
+from opendr.engine.data import Image
+from opendr.perception.multimodal_human_centric import RgbdHandGestureLearner
+
+
+class RgbdHandGestureNode(Node):
+
+ def __init__(self, input_rgb_image_topic="/kinect2/qhd/image_color_rect",
+ input_depth_image_topic="/kinect2/qhd/image_depth_rect",
+ output_gestures_topic="/opendr/gestures", device="cuda", delay=0.1):
+ """
+        Creates a ROS2 Node for gesture recognition from RGBD input. It assumes that the following drivers have been installed:
+ https://github.com/OpenKinect/libfreenect2 and https://github.com/code-iai/iai_kinect2.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param input_depth_image_topic: Topic from which we are reading the input depth image
+ :type input_depth_image_topic: str
+ :param output_gestures_topic: Topic to which we are publishing the predicted gesture class
+ :type output_gestures_topic: str
+ :param device: Device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+        :param delay: The delay (in seconds) within which the RGB and depth messages can be synchronized
+ :type delay: float
+ """
+ super().__init__("opendr_rgbd_hand_gesture_recognition_node")
+
+ self.gesture_publisher = self.create_publisher(Classification2D, output_gestures_topic, 1)
+
+ image_sub = message_filters.Subscriber(self, ROS_Image, input_rgb_image_topic, qos_profile=1)
+ depth_sub = message_filters.Subscriber(self, ROS_Image, input_depth_image_topic, qos_profile=1)
+ # synchronize image and depth data topics
+ ts = message_filters.ApproximateTimeSynchronizer([image_sub, depth_sub], queue_size=10, slop=delay)
+ ts.registerCallback(self.callback)
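+        # "slop" is the maximum allowed timestamp difference (in seconds) between the RGB and depth messages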
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the gesture recognition
+ self.gesture_learner = RgbdHandGestureLearner(n_class=16, architecture="mobilenet_v2", device=device)
+ model_path = './mobilenet_v2'
+ if not os.path.exists(model_path):
+ self.gesture_learner.download(path=model_path)
+ self.gesture_learner.load(path=model_path)
+
+ # mean and std for preprocessing, based on HANDS dataset
+ self.mean = np.asarray([0.485, 0.456, 0.406, 0.0303]).reshape(1, 1, 4)
+ self.std = np.asarray([0.229, 0.224, 0.225, 0.0353]).reshape(1, 1, 4)
+
+ self.get_logger().info("RGBD gesture recognition node started!")
+
+ def callback(self, rgb_data, depth_data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param rgb_data: input image message
+ :type rgb_data: sensor_msgs.msg.Image
+ :param depth_data: input depth image message
+ :type depth_data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image and preprocess
+ rgb_image = self.bridge.from_ros_image(rgb_data, encoding='bgr8')
+ depth_data.encoding = 'mono16'
+ depth_image = self.bridge.from_ros_image_to_depth(depth_data, encoding='mono16')
+ img = self.preprocess(rgb_image, depth_image)
+
+ # Run gesture recognition
+ gesture_class = self.gesture_learner.infer(img)
+
+ # Publish results
+ ros_gesture = self.bridge.from_category_to_rosclass(gesture_class, self.get_clock().now().to_msg())
+ self.gesture_publisher.publish(ros_gesture)
+
+ def preprocess(self, rgb_image, depth_image):
+ """
+ Preprocess rgb_image, depth_image and concatenate them
+ :param rgb_image: input RGB image
+ :type rgb_image: engine.data.Image
+ :param depth_image: input depth image
+ :type depth_image: engine.data.Image
+ """
+ rgb_image = rgb_image.convert(format='channels_last') / (2**8 - 1)
+ depth_image = depth_image.convert(format='channels_last') / (2**16 - 1)
+
+ # resize the images to 224x224
+ rgb_image = cv2.resize(rgb_image, (224, 224))
+ depth_image = cv2.resize(depth_image, (224, 224))
+
+ # concatenate and standardize
+ img = np.concatenate([rgb_image, np.expand_dims(depth_image, axis=-1)], axis=-1)
+ img = (img - self.mean) / self.std
+ img = Image(img, dtype=np.float32)
+ return img
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ # Default topics are according to kinectv2 drivers at https://github.com/OpenKinect/libfreenect2
+    # and https://github.com/code-iai/iai_kinect2
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-ic", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/kinect2/qhd/image_color_rect")
+ parser.add_argument("-id", "--input_depth_image_topic", help="Topic name for input depth image",
+ type=str, default="/kinect2/qhd/image_depth_rect")
+ parser.add_argument("-o", "--output_gestures_topic", help="Topic name for predicted gesture class",
+ type=str, default="/opendr/gestures")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda",
+ choices=["cuda", "cpu"])
+ parser.add_argument("--delay", help="The delay (in seconds) with which RGB message and"
+ "depth message can be synchronized", type=float, default=0.1)
+
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+    except Exception:
+ print("Using CPU")
+ device = "cpu"
+
+ gesture_node = RgbdHandGestureNode(input_rgb_image_topic=args.input_rgb_image_topic,
+ input_depth_image_topic=args.input_depth_image_topic,
+ output_gestures_topic=args.output_gestures_topic, device=device,
+ delay=args.delay)
+
+ rclpy.spin(gesture_node)
+
+ gesture_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/semantic_segmentation_bisenet_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/semantic_segmentation_bisenet_node.py
new file mode 100644
index 0000000000..91f860bbd1
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/semantic_segmentation_bisenet_node.py
@@ -0,0 +1,197 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import numpy as np
+import torch
+import cv2
+import colorsys
+
+import rclpy
+from rclpy.node import Node
+
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Image
+from opendr.engine.target import Heatmap
+from opendr.perception.semantic_segmentation import BisenetLearner
+
+
+class BisenetNode(Node):
+
+ def __init__(self, input_rgb_image_topic="/usb_cam/image_raw", output_heatmap_topic="/opendr/heatmap",
+ output_rgb_image_topic="/opendr/heatmap_visualization", device="cuda"):
+ """
+ Creates a ROS2 Node for semantic segmentation with Bisenet.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_heatmap_topic: Topic to which we are publishing the heatmap in the form of a ROS image containing
+ class ids
+ :type output_heatmap_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the heatmap image blended with the
+ input image and a class legend for visualization purposes
+ :type output_rgb_image_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ """
+ super().__init__('opendr_semantic_segmentation_bisenet_node')
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_heatmap_topic is not None:
+ self.heatmap_publisher = self.create_publisher(ROS_Image, output_heatmap_topic, 1)
+ else:
+ self.heatmap_publisher = None
+
+ if output_rgb_image_topic is not None:
+ self.visualization_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.visualization_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the semantic segmentation model
+ self.learner = BisenetLearner(device=device)
+ self.learner.download(path="bisenet_camvid")
+ self.learner.load("bisenet_camvid")
+
+ self.class_names = ["Bicyclist", "Building", "Car", "Column Pole", "Fence", "Pedestrian", "Road", "Sidewalk",
+ "Sign Symbol", "Sky", "Tree", "Unknown"]
+ self.colors = self.getDistinctColors(len(self.class_names)) # Generate n distinct colors
+
+ self.get_logger().info("Semantic segmentation bisenet node initialized.")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics.
+ :param data: Input image message
+ :type data: sensor_msgs.msg.Image
+ """
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ try:
+ # Run semantic segmentation to retrieve the OpenDR heatmap
+ heatmap = self.learner.infer(image)
+
+ # Publish heatmap in the form of an image containing class ids
+ if self.heatmap_publisher is not None:
+ heatmap = Heatmap(heatmap.data.astype(np.uint8)) # Convert to uint8
+ self.heatmap_publisher.publish(self.bridge.to_ros_image(heatmap))
+
+ # Publish heatmap color visualization blended with the input image and a class color legend
+ if self.visualization_publisher is not None:
+ heatmap_colors = Image(self.colors[heatmap.numpy()])
+ image = Image(cv2.resize(image.convert("channels_last", "bgr"), (960, 720)))
+ alpha = 0.4 # 1.0 means full input image, 0.0 means full heatmap
+ beta = (1.0 - alpha)
+ image_blended = cv2.addWeighted(image.opencv(), alpha, heatmap_colors.opencv(), beta, 0.0)
+ # Add a legend
+ image_blended = self.addLegend(image_blended, np.unique(heatmap.data))
+
+ self.visualization_publisher.publish(self.bridge.to_ros_image(Image(image_blended),
+ encoding='bgr8'))
+        except Exception as e:
+            self.get_logger().warn(f'Failed to generate prediction: {e}')
+
+ def addLegend(self, image, unique_class_ints):
+ # Text setup
+ origin_x, origin_y = 5, 5 # Text origin x, y
+ color_rectangle_size = 25
+ font_size = 1.0
+ font_thickness = 2
+ w_max = 0
+ for i in range(len(unique_class_ints)):
+ text = self.class_names[unique_class_ints[i]] # Class name
+ x, y = origin_x, origin_y + i * color_rectangle_size # Text position
+ # Determine class color and convert to regular integers
+ color = (int(self.colors[unique_class_ints[i]][0]),
+ int(self.colors[unique_class_ints[i]][1]),
+ int(self.colors[unique_class_ints[i]][2]))
+ # Get text width and height
+ (w, h), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, font_size, font_thickness)
+ if w >= w_max:
+ w_max = w
+ # Draw partial background rectangle
+ image = cv2.rectangle(image, (x - origin_x, y),
+ (x + origin_x + color_rectangle_size + w_max,
+ y + color_rectangle_size),
+ (255, 255, 255, 0.5), -1)
+ # Draw color rectangle
+ image = cv2.rectangle(image, (x, y),
+ (x + color_rectangle_size, y + color_rectangle_size), color, -1)
+ # Draw class name text
+ image = cv2.putText(image, text, (x + color_rectangle_size + 2, y + h),
+ cv2.FONT_HERSHEY_SIMPLEX, font_size, (0, 0, 0), font_thickness)
+ return image
+
+ @staticmethod
+ def HSVToRGB(h, s, v):
+ (r, g, b) = colorsys.hsv_to_rgb(h, s, v)
+ return np.array([int(255 * r), int(255 * g), int(255 * b)])
+
+ def getDistinctColors(self, n):
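+        # Sample n evenly spaced hues at full saturation and value to obtain visually distinct class colors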
+ huePartition = 1.0 / (n + 1)
+ return np.array([self.HSVToRGB(huePartition * value, 1.0, 1.0) for value in range(0, n)]).astype(np.uint8)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_heatmap_topic", help="Topic to which we are publishing the heatmap in the form "
+ "of a ROS image containing class ids",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/heatmap")
+ parser.add_argument("-ov", "--output_rgb_image_topic", help="Topic to which we are publishing the heatmap image "
+ "blended with the input image and a class legend for "
+ "visualization purposes",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/heatmap_visualization")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ bisenet_node = BisenetNode(device=device,
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_heatmap_topic=args.output_heatmap_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic)
+
+ rclpy.spin(bisenet_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ bisenet_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py
new file mode 100644
index 0000000000..dce55a5630
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py
@@ -0,0 +1,249 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+
+import rclpy
+from rclpy.node import Node
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+from opendr_interface.msg import OpenDRPose2D
+
+from opendr.engine.data import Image
+from opendr.perception.pose_estimation import draw
+from opendr.perception.pose_estimation import LightweightOpenPoseLearner
+from opendr.perception.skeleton_based_action_recognition import SpatioTemporalGCNLearner
+from opendr.perception.skeleton_based_action_recognition import ProgressiveSpatioTemporalGCNLearner
+
+
+class SkeletonActionRecognitionNode(Node):
+
+ def __init__(self, input_rgb_image_topic="image_raw",
+ output_rgb_image_topic="/opendr/image_pose_annotated",
+ pose_annotations_topic="/opendr/poses",
+ output_category_topic="/opendr/skeleton_recognized_action",
+ output_category_description_topic="/opendr/skeleton_recognized_action_description",
+ device="cuda", model='stgcn'):
+ """
+ Creates a ROS2 Node for skeleton-based action recognition
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
+ annotated image)
+ :type output_rgb_image_topic: str
+        :param pose_annotations_topic: Topic to which we are publishing the pose annotations (if None, we are not
+                                       publishing pose annotations)
+ :type pose_annotations_topic: str
+ :param output_category_topic: Topic to which we are publishing the recognized action category info
+ (if None, we are not publishing the info)
+ :type output_category_topic: str
+ :param output_category_description_topic: Topic to which we are publishing the description of the recognized
+ action (if None, we are not publishing the description)
+ :type output_category_description_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model: model to use for skeleton-based action recognition.
+ (Options: 'stgcn', 'pstgcn')
+ :type model: str
+ """
+ super().__init__('opendr_skeleton_based_action_recognition_node')
+ # Set up ROS topics and bridge
+
+ self.image_subscriber = self.create_subscription(ROS_Image, input_rgb_image_topic, self.callback, 1)
+
+ if output_rgb_image_topic is not None:
+ self.image_publisher = self.create_publisher(ROS_Image, output_rgb_image_topic, 1)
+ else:
+ self.image_publisher = None
+
+ if pose_annotations_topic is not None:
+ self.pose_publisher = self.create_publisher(OpenDRPose2D, pose_annotations_topic, 1)
+ else:
+ self.pose_publisher = None
+
+ if output_category_topic is not None:
+ self.hypothesis_publisher = self.create_publisher(ObjectHypothesis, output_category_topic, 1)
+ else:
+ self.hypothesis_publisher = None
+
+ if output_category_description_topic is not None:
+ self.string_publisher = self.create_publisher(String, output_category_description_topic, 1)
+ else:
+ self.string_publisher = None
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the pose estimation
+ self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=2,
+ mobilenet_use_stride=False,
+ half_precision=False
+ )
+ self.pose_estimator.download(path=".", verbose=True)
+ self.pose_estimator.load("openpose_default")
+
+ # Initialize the skeleton_based action recognition
+ if model == 'stgcn':
+ self.action_classifier = SpatioTemporalGCNLearner(device=device, dataset_name='nturgbd_cv',
+ method_name=model, in_channels=2, num_point=18,
+ graph_type='openpose')
+ elif model == 'pstgcn':
+ self.action_classifier = ProgressiveSpatioTemporalGCNLearner(device=device, dataset_name='nturgbd_cv',
+ topology=[5, 4, 5, 2, 3, 4, 3, 4],
+ in_channels=2, num_point=18,
+ graph_type='openpose')
+
+ model_saved_path = self.action_classifier.download(path="./pretrained_models/"+model,
+ method_name=model, mode="pretrained",
+ file_name=model+'_ntu_cv_lw_openpose')
+ self.action_classifier.load(model_saved_path, model+'_ntu_cv_lw_openpose')
+
+ self.get_logger().info("Skeleton-based action recognition node started!")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+
+ # Convert sensor_msgs.msg.Image into OpenDR Image
+ image = self.bridge.from_ros_image(data, encoding='bgr8')
+
+ # Run pose estimation
+ poses = self.pose_estimator.infer(image)
+ if len(poses) > 2:
+ # select two poses with highest energy
+ poses = _select_2_poses(poses)
+
+ # Get an OpenCV image back
+ image = image.opencv()
+ # Annotate image and publish results
+ for pose in poses:
+ if self.pose_publisher is not None:
+ ros_pose = self.bridge.to_ros_pose(pose)
+ self.pose_publisher.publish(ros_pose)
+                # We can get the data back using self.bridge.from_ros_pose(ros_pose)
+ # e.g., opendr_pose = self.bridge.from_ros_pose(ros_pose)
+ draw(image, pose)
+
+ if self.image_publisher is not None:
+ message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
+ self.image_publisher.publish(message)
+
+ num_frames = 300
+ poses_list = []
+ for _ in range(num_frames):
+ poses_list.append(poses)
+ skeleton_seq = _pose2numpy(num_frames, poses_list)
+
+ # Run action recognition
+ category = self.action_classifier.infer(skeleton_seq)
+ category.confidence = float(category.confidence.max())
+
+ if self.hypothesis_publisher is not None:
+ self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))
+
+ if self.string_publisher is not None:
+ self.string_publisher.publish(self.bridge.to_ros_category_description(category))
+
+
+def _select_2_poses(poses):
+ selected_poses = []
+ energy = []
+ for i in range(len(poses)):
+ s = poses[i].data[:, 0].std() + poses[i].data[:, 1].std()
+ energy.append(s)
+ energy = np.array(energy)
+ index = energy.argsort()[::-1][0:2]
+ for i in range(len(index)):
+ selected_poses.append(poses[index[i]])
+ return selected_poses
+
+
+def _pose2numpy(num_current_frames, poses_list):
+ C = 2
+ T = 300
+ V = 18
+ M = 2 # num_person_in
+ skeleton_seq = np.zeros((1, C, T, V, M))
+ for t in range(num_current_frames):
+ for m in range(len(poses_list[t])):
+ skeleton_seq[0, 0:2, t, :, m] = np.transpose(poses_list[t][m].data)
+ return skeleton_seq
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input image",
+ type=str, default="image_raw")
+ parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated image",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/image_pose_annotated")
+ parser.add_argument("-p", "--pose_annotations_topic", help="Topic name for pose annotations",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/poses")
+ parser.add_argument("-c", "--output_category_topic", help="Topic name for recognized action category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/skeleton_recognized_action")
+ parser.add_argument("-d", "--output_category_description_topic", help="Topic name for description of the "
+ "recognized action category",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/skeleton_recognized_action_description")
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model to use, either \"stgcn\" or \"pstgcn\"",
+ type=str, default="stgcn", choices=["stgcn", "pstgcn"])
+
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+    except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ skeleton_action_recognition_node = \
+ SkeletonActionRecognitionNode(input_rgb_image_topic=args.input_rgb_image_topic,
+ output_rgb_image_topic=args.output_rgb_image_topic,
+ pose_annotations_topic=args.pose_annotations_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device,
+ model=args.model)
+
+ rclpy.spin(skeleton_action_recognition_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ skeleton_action_recognition_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
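
`_pose2numpy` above packs up to two OpenPose skeletons per frame into the `(1, C, T, V, M)` tensor the ST-GCN learners expect. A minimal sketch of that shape handling with dummy keypoints follows (no ROS, no learners; `poses_to_tensor` and the random data are illustrative):

```python
# Minimal sketch: build the (1, C=2, T=300, V=18, M=2) skeleton tensor the
# same way _pose2numpy() does above, using synthetic keypoints.
import numpy as np

C, T, V, M = 2, 300, 18, 2  # channels (x, y), frames, joints, max persons

def poses_to_tensor(poses_per_frame):
    seq = np.zeros((1, C, T, V, M))
    for t, poses in enumerate(poses_per_frame[:T]):
        for m, pose in enumerate(poses[:M]):
            # pose is a (V, 2) array of (x, y) keypoints -> transpose to (2, V)
            seq[0, 0:2, t, :, m] = pose.T
    return seq

if __name__ == "__main__":
    dummy_frame = [np.random.rand(V, 2), np.random.rand(V, 2)]  # two people
    tensor = poses_to_tensor([dummy_frame] * T)                 # repeat frame, as the node does
    print(tensor.shape)  # (1, 2, 300, 18, 2)
```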
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/speech_command_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/speech_command_recognition_node.py
new file mode 100755
index 0000000000..d15f26433a
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/speech_command_recognition_node.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import numpy as np
+
+import rclpy
+from rclpy.node import Node
+from audio_common_msgs.msg import AudioData
+from vision_msgs.msg import Classification2D
+
+from opendr_bridge import ROS2Bridge
+from opendr.engine.data import Timeseries
+from opendr.perception.speech_recognition import MatchboxNetLearner, EdgeSpeechNetsLearner, QuadraticSelfOnnLearner
+
+
+class SpeechRecognitionNode(Node):
+
+ def __init__(self, input_audio_topic="/audio", output_speech_command_topic="/opendr/speech_recognition",
+ buffer_size=1.5, model="matchboxnet", model_path=None, device="cuda"):
+ """
+ Creates a ROS2 Node for speech command recognition
+ :param input_audio_topic: Topic from which the audio data is received
+ :type input_audio_topic: str
+ :param output_speech_command_topic: Topic to which the predictions are published
+ :type output_speech_command_topic: str
+ :param buffer_size: Length of the audio buffer in seconds
+ :type buffer_size: float
+        :param model: base speech command recognition model: matchboxnet, edgespeechnets or quad_selfonn
+        :type model: str
+        :param model_path: path to a pretrained model (if None, the pretrained model is downloaded)
+        :type model_path: str
+        :param device: device for inference ("cpu" or "cuda")
+        :type device: str
+ """
+ super().__init__("opendr_speech_command_recognition_node")
+
+ self.publisher = self.create_publisher(Classification2D, output_speech_command_topic, 1)
+
+ self.create_subscription(AudioData, input_audio_topic, self.callback, 1)
+
+ self.bridge = ROS2Bridge()
+
+ # Initialize the internal audio buffer
+ self.buffer_size = buffer_size
+ self.data_buffer = np.zeros((1, 1))
+
+ # Initialize the recognition model
+ if model == "matchboxnet":
+ self.learner = MatchboxNetLearner(output_classes_n=20, device=device)
+ load_path = "./MatchboxNet"
+ elif model == "edgespeechnets":
+ self.learner = EdgeSpeechNetsLearner(output_classes_n=20, device=device)
+ assert model_path is not None, "No pretrained EdgeSpeechNets model available for download"
+ elif model == "quad_selfonn":
+ self.learner = QuadraticSelfOnnLearner(output_classes_n=20, device=device)
+ load_path = "./QuadraticSelfOnn"
+
+ # Download the recognition model
+ if model_path is None:
+ self.learner.download_pretrained(path=".")
+ self.learner.load(load_path)
+ else:
+ self.learner.load(model_path)
+
+ self.get_logger().info("Speech command recognition node started!")
+
+ def callback(self, msg_data):
+ """
+ Callback that processes the input data and publishes predictions to the output topic
+ :param msg_data: incoming message
+ :type msg_data: audio_common_msgs.msg.AudioData
+ """
+ # Accumulate data until the buffer is full
+ data = np.reshape(np.frombuffer(msg_data.data, dtype=np.int16)/32768.0, (1, -1))
+ self.data_buffer = np.append(self.data_buffer, data, axis=1)
+
+ if self.data_buffer.shape[1] > 16000*self.buffer_size:
+
+ # Convert sample to OpenDR Timeseries and perform classification
+ input_sample = Timeseries(self.data_buffer)
+ class_pred = self.learner.infer(input_sample)
+
+ # Publish output
+ ros_class = self.bridge.from_category_to_rosclass(class_pred, self.get_clock().now().to_msg())
+ self.publisher.publish(ros_class)
+
+ # Reset the audio buffer
+ self.data_buffer = np.zeros((1, 1))
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_audio_topic", type=str, default="/audio",
+ help="Listen to input data on this topic")
+ parser.add_argument("-o", "--output_speech_command_topic", type=str, default="/opendr/speech_recognition",
+ help="Topic name for speech command output")
+ parser.add_argument("--device", type=str, default="cuda", choices=["cuda", "cpu"],
+ help="Device to use (cpu, cuda)")
+ parser.add_argument("--buffer_size", type=float, default=1.5, help="Size of the audio buffer in seconds")
+ parser.add_argument("--model", default="matchboxnet", choices=["matchboxnet", "edgespeechnets", "quad_selfonn"],
+ help="Model to be used for prediction: matchboxnet, edgespeechnets or quad_selfonn")
+ parser.add_argument("--model_path", type=str,
+ help="Path to the model files, if not given, the pretrained model will be downloaded")
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU")
+ device = "cpu"
+    except Exception:
+ print("Using CPU")
+ device = "cpu"
+
+ speech_node = SpeechRecognitionNode(input_audio_topic=args.input_audio_topic,
+ output_speech_command_topic=args.output_speech_command_topic,
+ buffer_size=args.buffer_size, model=args.model, model_path=args.model_path,
+ device=device)
+
+ rclpy.spin(speech_node)
+
+ speech_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == "__main__":
+ main()
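
The callback above accumulates int16 PCM samples (scaled to [-1, 1]) until the buffer spans `buffer_size` seconds at the assumed 16 kHz rate, then classifies and resets. A minimal sketch of that buffering logic with synthetic audio follows (no ROS; `AudioBuffer` is an illustrative name):

```python
# Minimal sketch: accumulate normalized int16 chunks until the buffer holds
# buffer_size seconds at 16 kHz, mirroring callback() in the node above.
import numpy as np

SAMPLE_RATE = 16000

class AudioBuffer:
    def __init__(self, buffer_size=1.5):
        self.buffer_size = buffer_size
        self.data = np.zeros((1, 1))

    def push(self, raw_bytes):
        # Decode int16 PCM bytes and scale to [-1, 1], as the node does.
        chunk = np.reshape(np.frombuffer(raw_bytes, dtype=np.int16) / 32768.0, (1, -1))
        self.data = np.append(self.data, chunk, axis=1)
        if self.data.shape[1] > SAMPLE_RATE * self.buffer_size:
            ready = self.data          # would be wrapped in Timeseries and classified
            self.data = np.zeros((1, 1))
            return ready
        return None

if __name__ == "__main__":
    buf = AudioBuffer()
    chunk = (np.random.randn(1024) * 1000).astype(np.int16).tobytes()
    out = None
    while out is None:
        out = buf.push(chunk)
    print(out.shape)  # roughly (1, 16000 * 1.5) samples
```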
diff --git a/projects/opendr_ws_2/src/opendr_perception/opendr_perception/video_activity_recognition_node.py b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/video_activity_recognition_node.py
new file mode 100644
index 0000000000..9e137036b8
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/opendr_perception/video_activity_recognition_node.py
@@ -0,0 +1,250 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+import torchvision
+import cv2
+import rclpy
+from rclpy.node import Node
+from pathlib import Path
+
+from std_msgs.msg import String
+from vision_msgs.msg import ObjectHypothesis
+from sensor_msgs.msg import Image as ROS_Image
+from opendr_bridge import ROS2Bridge
+
+from opendr.engine.data import Video, Image
+from opendr.perception.activity_recognition import CLASSES as KINETICS400_CLASSES
+from opendr.perception.activity_recognition import CoX3DLearner
+from opendr.perception.activity_recognition import X3DLearner
+
+
+class HumanActivityRecognitionNode(Node):
+ def __init__(
+ self,
+ input_rgb_image_topic="image_raw",
+ output_category_topic="/opendr/human_activity_recognition",
+ output_category_description_topic="/opendr/human_activity_recognition_description",
+ device="cuda",
+ model="cox3d-m",
+ ):
+ """
+ Creates a ROS2 Node for video-based human activity recognition.
+ :param input_rgb_image_topic: Topic from which we are reading the input image
+ :type input_rgb_image_topic: str
+ :param output_category_topic: Topic to which we are publishing the recognized activity
+ (if None, we are not publishing the info)
+ :type output_category_topic: str
+        :param output_category_description_topic: Topic to which we are publishing the description of the recognized
+        activity (if None, we are not publishing the description)
+ :type output_category_description_topic: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param model: Architecture to use for human activity recognition.
+ (Options: 'cox3d-s', 'cox3d-m', 'cox3d-l', 'x3d-xs', 'x3d-s', 'x3d-m', 'x3d-l')
+ :type model: str
+ """
+ super().__init__("opendr_video_human_activity_recognition_node")
+ assert model in {
+ "cox3d-s",
+ "cox3d-m",
+ "cox3d-l",
+ "x3d-xs",
+ "x3d-s",
+ "x3d-m",
+ "x3d-l",
+ }
+ model_name, model_size = model.split("-")
+ Learner = {"cox3d": CoX3DLearner, "x3d": X3DLearner}[model_name]
+
+ # Initialize the human activity recognition
+ self.learner = Learner(device=device, backbone=model_size)
+ self.learner.download(path="model_weights", model_names={model_size})
+ self.learner.load(Path("model_weights") / f"x3d_{model_size}.pyth")
+
+ # Set up preprocessing
+ if model_name == "cox3d":
+ self.preprocess = _image_preprocess(
+ image_size=self.learner.model_hparams["image_size"]
+ )
+ else: # == x3d
+ self.preprocess = _video_preprocess(
+ image_size=self.learner.model_hparams["image_size"],
+ window_size=self.learner.model_hparams["frames_per_clip"],
+ )
+
+ # Set up ROS topics and bridge
+ self.image_subscriber = self.create_subscription(
+ ROS_Image, input_rgb_image_topic, self.callback, 1
+ )
+ self.hypothesis_publisher = (
+ self.create_publisher(ObjectHypothesis, output_category_topic, 1)
+ if output_category_topic
+ else None
+ )
+ self.string_publisher = (
+ self.create_publisher(String, output_category_description_topic, 1)
+ if output_category_description_topic
+ else None
+ )
+ self.bridge = ROS2Bridge()
+ self.get_logger().info("Video Human Activity Recognition node initialized.")
+
+ def callback(self, data):
+ """
+        Callback that processes the input data and publishes to the corresponding topics
+ :param data: input message
+ :type data: sensor_msgs.msg.Image
+ """
+ image = self.bridge.from_ros_image(data, encoding="rgb8")
+ if image is None:
+ return
+
+ x = self.preprocess(image.convert("channels_first", "rgb"))
+
+ result = self.learner.infer(x)
+ assert len(result) == 1
+ category = result[0]
+ # Confidence for predicted class
+ category.confidence = float(category.confidence.max())
+ category.description = KINETICS400_CLASSES[category.data] # Class name
+
+ if self.hypothesis_publisher is not None:
+ self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))
+
+ if self.string_publisher is not None:
+ self.string_publisher.publish(
+ self.bridge.to_ros_category_description(category)
+ )
+
+
+def _resize(image, size=None, inter=cv2.INTER_AREA):
+ # initialize the dimensions of the image to be resized and
+ # grab the image size
+ dim = None
+ (h, w) = image.shape[:2]
+
+ if h > w:
+ # calculate the ratio of the width and construct the
+ # dimensions
+ r = size / float(w)
+ dim = (size, int(h * r))
+ else:
+ # calculate the ratio of the height and construct the
+ # dimensions
+ r = size / float(h)
+ dim = (int(w * r), size)
+
+ # resize the image
+ resized = cv2.resize(image, dim, interpolation=inter)
+
+ # return the resized image
+ return resized
+
+
+def _image_preprocess(image_size: int):
+ standardize = torchvision.transforms.Normalize(
+ mean=(0.45, 0.45, 0.45), std=(0.225, 0.225, 0.225)
+ )
+
+ def wrapped(frame):
+ nonlocal standardize
+ frame = frame.transpose((1, 2, 0)) # C, H, W -> H, W, C
+ frame = _resize(frame, size=image_size)
+ frame = torch.tensor(frame).permute((2, 0, 1)) # H, W, C -> C, H, W
+ frame = frame / 255.0 # [0, 255] -> [0.0, 1.0]
+ frame = standardize(frame)
+ return Image(frame, dtype=float)
+
+ return wrapped
+
+
+def _video_preprocess(image_size: int, window_size: int):
+ frames = []
+
+ standardize = torchvision.transforms.Normalize(
+ mean=(0.45, 0.45, 0.45), std=(0.225, 0.225, 0.225)
+ )
+
+ def wrapped(frame):
+ nonlocal frames, standardize
+ frame = frame.transpose((1, 2, 0)) # C, H, W -> H, W, C
+ frame = _resize(frame, size=image_size)
+ frame = torch.tensor(frame).permute((2, 0, 1)) # H, W, C -> C, H, W
+ frame = frame / 255.0 # [0, 255] -> [0.0, 1.0]
+ frame = standardize(frame)
+ if not frames:
+ frames = [frame for _ in range(window_size)]
+ else:
+ frames.pop(0)
+ frames.append(frame)
+ vid = Video(torch.stack(frames, dim=1))
+ return vid
+
+ return wrapped
+
+
+def main(args=None):
+ rclpy.init(args=args)
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+ type=str, default="/image_raw")
+ parser.add_argument("-o", "--output_category_topic", help="Topic to which we are publishing the recognized activity",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/human_activity_recognition")
+ parser.add_argument("-od", "--output_category_description_topic",
+ help="Topic to which we are publishing the ID of the recognized action",
+ type=lambda value: value if value.lower() != "none" else None,
+ default="/opendr/human_activity_recognition_description")
+ parser.add_argument("--device", help='Device to use, either "cpu" or "cuda", defaults to "cuda"',
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Architecture to use for human activity recognition.",
+ type=str, default="cox3d-m",
+ choices=["cox3d-s", "cox3d-m", "cox3d-l", "x3d-xs", "x3d-s", "x3d-m", "x3d-l"])
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except Exception:
+ print("Using CPU.")
+ device = "cpu"
+
+ human_activity_recognition_node = HumanActivityRecognitionNode(
+ input_rgb_image_topic=args.input_rgb_image_topic,
+ output_category_topic=args.output_category_topic,
+ output_category_description_topic=args.output_category_description_topic,
+ device=device,
+ model=args.model,
+ )
+ rclpy.spin(human_activity_recognition_node)
+
+ # Destroy the node explicitly
+ # (optional - otherwise it will be done automatically
+ # when the garbage collector destroys the node object)
+ human_activity_recognition_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == "__main__":
+ main()
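
`_video_preprocess` above keeps a sliding window of the last `window_size` preprocessed frames and stacks them along the time dimension before each inference. A minimal sketch of that windowing with dummy tensors follows (torch only, no ROS or torchvision; `make_clip_builder` is an illustrative name):

```python
# Minimal sketch: maintain a sliding window of the last window_size frames and
# stack them into a (C, T, H, W) clip, mirroring _video_preprocess() above.
import torch

def make_clip_builder(window_size):
    frames = []

    def push(frame):  # frame: (C, H, W) tensor
        nonlocal frames
        if not frames:
            frames = [frame for _ in range(window_size)]  # pad on first call
        else:
            frames.pop(0)
            frames.append(frame)
        return torch.stack(frames, dim=1)  # (C, T, H, W)

    return push

if __name__ == "__main__":
    push = make_clip_builder(window_size=4)
    for _ in range(6):
        clip = push(torch.rand(3, 160, 160))
    print(clip.shape)  # torch.Size([3, 4, 160, 160])
```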
diff --git a/projects/opendr_ws_2/src/opendr_perception/package.xml b/projects/opendr_ws_2/src/opendr_perception/package.xml
new file mode 100644
index 0000000000..a178dbd084
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/package.xml
@@ -0,0 +1,26 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_perception</name>
+  <version>2.0.0</version>
+  <description>OpenDR ROS2 nodes for the perception package</description>
+  <maintainer email="tefas@csd.auth.gr">OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+
+  <depend>rclpy</depend>
+
+  <depend>std_msgs</depend>
+  <depend>vision_msgs</depend>
+  <depend>geometry_msgs</depend>
+
+  <depend>opendr_bridge</depend>
+
+  <test_depend>ament_copyright</test_depend>
+  <test_depend>ament_flake8</test_depend>
+  <test_depend>ament_pep257</test_depend>
+  <test_depend>python3-pytest</test_depend>
+
+  <export>
+    <build_type>ament_python</build_type>
+  </export>
+</package>
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/options/__init__.py b/projects/opendr_ws_2/src/opendr_perception/resource/opendr_perception
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/options/__init__.py
rename to projects/opendr_ws_2/src/opendr_perception/resource/opendr_perception
diff --git a/projects/opendr_ws_2/src/opendr_perception/setup.cfg b/projects/opendr_ws_2/src/opendr_perception/setup.cfg
new file mode 100644
index 0000000000..45e65634f1
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/setup.cfg
@@ -0,0 +1,6 @@
+[develop]
+script_dir=$base/lib/opendr_perception
+[install]
+install_scripts=$base/lib/opendr_perception
+[build_scripts]
+executable = /usr/bin/env python3
diff --git a/projects/opendr_ws_2/src/opendr_perception/setup.py b/projects/opendr_ws_2/src/opendr_perception/setup.py
new file mode 100644
index 0000000000..50aabf50a2
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/setup.py
@@ -0,0 +1,55 @@
+from setuptools import setup
+
+package_name = 'opendr_perception'
+
+setup(
+ name=package_name,
+ version='2.0.0',
+ packages=[package_name],
+ data_files=[
+ ('share/ament_index/resource_index/packages',
+ ['resource/' + package_name]),
+ ('share/' + package_name, ['package.xml']),
+ ],
+ install_requires=['setuptools'],
+ zip_safe=True,
+ maintainer='OpenDR Project Coordinator',
+ maintainer_email='tefas@csd.auth.gr',
+ description='OpenDR ROS2 nodes for the perception package',
+ license='Apache License v2.0',
+ tests_require=['pytest'],
+ entry_points={
+ 'console_scripts': [
+ 'pose_estimation = opendr_perception.pose_estimation_node:main',
+ 'hr_pose_estimation = opendr_perception.hr_pose_estimation_node:main',
+ 'object_detection_2d_centernet = opendr_perception.object_detection_2d_centernet_node:main',
+ 'object_detection_2d_detr = opendr_perception.object_detection_2d_detr_node:main',
+ 'object_detection_2d_yolov3 = opendr_perception.object_detection_2d_yolov3_node:main',
+ 'object_detection_2d_yolov5 = opendr_perception.object_detection_2d_yolov5_node:main',
+ 'object_detection_2d_ssd = opendr_perception.object_detection_2d_ssd_node:main',
+ 'object_detection_2d_nanodet = opendr_perception.object_detection_2d_nanodet_node:main',
+ 'object_detection_2d_gem = opendr_perception.object_detection_2d_gem_node:main',
+ 'object_tracking_2d_siamrpn = opendr_perception.object_tracking_2d_siamrpn_node:main',
+ 'face_detection_retinaface = opendr_perception.face_detection_retinaface_node:main',
+ 'semantic_segmentation_bisenet = opendr_perception.semantic_segmentation_bisenet_node:main',
+ 'panoptic_segmentation = opendr_perception.panoptic_segmentation_efficient_ps_node:main',
+ 'face_recognition = opendr_perception.face_recognition_node:main',
+ 'fall_detection = opendr_perception.fall_detection_node:main',
+ 'point_cloud_dataset = opendr_perception.point_cloud_dataset_node:main',
+ 'image_dataset = opendr_perception.image_dataset_node:main',
+ 'object_detection_3d_voxel = opendr_perception.object_detection_3d_voxel_node:main',
+ 'object_tracking_3d_ab3dmot = opendr_perception.object_tracking_3d_ab3dmot_node:main',
+ 'object_tracking_2d_fair_mot = opendr_perception.object_tracking_2d_fair_mot_node:main',
+ 'object_tracking_2d_deep_sort = opendr_perception.object_tracking_2d_deep_sort_node:main',
+ 'video_activity_recognition = opendr_perception.video_activity_recognition_node:main',
+ 'audiovisual_emotion_recognition = opendr_perception.audiovisual_emotion_recognition_node:main',
+ 'speech_command_recognition = opendr_perception.speech_command_recognition_node:main',
+ 'heart_anomaly_detection = opendr_perception.heart_anomaly_detection_node:main',
+ 'rgbd_hand_gestures_recognition = opendr_perception.rgbd_hand_gesture_recognition_node:main',
+ 'landmark_based_facial_expression_recognition = \
+ opendr_perception.landmark_based_facial_expression_recognition_node:main',
+ 'facial_emotion_estimation = opendr_perception.facial_emotion_estimation_node:main',
+ 'skeleton_based_action_recognition = opendr_perception.skeleton_based_action_recognition_node:main',
+ ],
+ },
+)
diff --git a/projects/opendr_ws_2/src/opendr_perception/test/test_copyright.py b/projects/opendr_ws_2/src/opendr_perception/test/test_copyright.py
new file mode 100644
index 0000000000..cc8ff03f79
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/test/test_copyright.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_copyright.main import main
+import pytest
+
+
+@pytest.mark.copyright
+@pytest.mark.linter
+def test_copyright():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found errors'
diff --git a/projects/opendr_ws_2/src/opendr_perception/test/test_flake8.py b/projects/opendr_ws_2/src/opendr_perception/test/test_flake8.py
new file mode 100644
index 0000000000..27ee1078ff
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/test/test_flake8.py
@@ -0,0 +1,25 @@
+# Copyright 2017 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_flake8.main import main_with_errors
+import pytest
+
+
+@pytest.mark.flake8
+@pytest.mark.linter
+def test_flake8():
+ rc, errors = main_with_errors(argv=[])
+ assert rc == 0, \
+ 'Found %d code style errors / warnings:\n' % len(errors) + \
+ '\n'.join(errors)
diff --git a/projects/opendr_ws_2/src/opendr_perception/test/test_pep257.py b/projects/opendr_ws_2/src/opendr_perception/test/test_pep257.py
new file mode 100644
index 0000000000..b234a3840f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_perception/test/test_pep257.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_pep257.main import main
+import pytest
+
+
+@pytest.mark.linter
+@pytest.mark.pep257
+def test_pep257():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found code style errors / warnings'
diff --git a/projects/opendr_ws_2/src/opendr_planning/launch/end_to_end_planning_robot_launch.py b/projects/opendr_ws_2/src/opendr_planning/launch/end_to_end_planning_robot_launch.py
new file mode 100644
index 0000000000..ae61c2c1f7
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/launch/end_to_end_planning_robot_launch.py
@@ -0,0 +1,56 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import pathlib
+import launch
+from launch_ros.actions import Node
+from launch import LaunchDescription
+from ament_index_python.packages import get_package_share_directory
+from webots_ros2_driver.webots_launcher import WebotsLauncher, Ros2SupervisorLauncher
+from webots_ros2_driver.utils import controller_url_prefix
+
+
+def generate_launch_description():
+ package_dir = get_package_share_directory('opendr_planning')
+ robot_description = pathlib.Path(os.path.join(package_dir, 'resource', 'uav_robot.urdf')).read_text()
+
+ webots = WebotsLauncher(
+ world=os.path.join(package_dir, 'worlds', 'train-no-dynamic-random-obstacles.wbt')
+ )
+
+ ros2_supervisor = Ros2SupervisorLauncher()
+
+ e2e_UAV_robot_driver = Node(
+ package='webots_ros2_driver',
+ executable='driver',
+ output='screen',
+ additional_env={'WEBOTS_CONTROLLER_URL': controller_url_prefix() + 'quad_plus_sitl'},
+ parameters=[
+ {'robot_description': robot_description},
+ ]
+ )
+
+ return LaunchDescription([
+ webots,
+ e2e_UAV_robot_driver,
+ ros2_supervisor,
+ launch.actions.RegisterEventHandler(
+ event_handler=launch.event_handlers.OnProcessExit(
+ target_action=webots,
+ on_exit=[launch.actions.EmitEvent(event=launch.events.Shutdown())],
+ )
+ )
+ ])
diff --git a/projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/util/__init__.py b/projects/opendr_ws_2/src/opendr_planning/opendr_planning/__init__.py
similarity index 100%
rename from projects/data_generation/synthetic_multi_view_facial_image_generation/algorithm/Rotate_and_Render/util/__init__.py
rename to projects/opendr_ws_2/src/opendr_planning/opendr_planning/__init__.py
diff --git a/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planner_node.py b/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planner_node.py
new file mode 100755
index 0000000000..9cd8fa2c83
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planner_node.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import rclpy
+from rclpy.node import Node
+import numpy as np
+from cv_bridge import CvBridge
+from sensor_msgs.msg import Imu, Image
+from geometry_msgs.msg import PoseStamped, PointStamped
+from opendr.planning.end_to_end_planning import EndToEndPlanningRLLearner
+from opendr.planning.end_to_end_planning.utils.euler_quaternion_transformations import euler_from_quaternion
+from opendr.planning.end_to_end_planning.utils.euler_quaternion_transformations import euler_to_quaternion
+
+
+class EndToEndPlannerNode(Node):
+
+ def __init__(self):
+ """
+ Creates a ROS Node for end-to-end planner
+ """
+ super().__init__("opendr_end_to_end_planner_node")
+ self.model_name = ""
+ self.current_pose = PoseStamped()
+ self.target_pose = PoseStamped()
+ self.current_pose.header.frame_id = "map"
+ self.target_pose.header.frame_id = "map"
+ self.bridge = CvBridge()
+ self.input_depth_image_topic = "/quad_plus_sitl/range_finder"
+ self.position_topic = "/quad_plus_sitl/gps"
+ self.orientation_topic = "/imu"
+ self.end_to_end_planner = EndToEndPlanningRLLearner(env=None)
+
+ self.ros2_pub_current_pose = self.create_publisher(PoseStamped, 'current_uav_pose', 10)
+ self.ros2_pub_target_pose = self.create_publisher(PoseStamped, 'target_uav_pose', 10)
+ self.create_subscription(Imu, self.orientation_topic, self.imu_callback, 1)
+ self.create_subscription(PointStamped, self.position_topic, self.gps_callback, 1)
+ self.create_subscription(Image, self.input_depth_image_topic, self.range_callback, 1)
+ self.get_logger().info("End-to-end planning node initialized.")
+
+ def range_callback(self, data):
+ image_arr = self.bridge.imgmsg_to_cv2(data)
+ self.range_image = ((np.clip(image_arr.reshape((64, 64, 1)), 0, 15) / 15.) * 255).astype(np.uint8)
+ observation = {'depth_cam': np.copy(self.range_image), 'moving_target': np.array([5, 0, 0])}
+ action = self.end_to_end_planner.infer(observation, deterministic=True)[0]
+ self.publish_poses(action)
+
+ def gps_callback(self, data):
+ self.current_pose.pose.position.x = -data.point.x
+ self.current_pose.pose.position.y = -data.point.y
+ self.current_pose.pose.position.z = data.point.z
+
+ def imu_callback(self, data):
+ self.current_orientation = data.orientation
+ self.current_yaw = euler_from_quaternion(data.orientation)["yaw"]
+ self.current_pose.pose.orientation = euler_to_quaternion(0, 0, yaw=self.current_yaw)
+
+ def model_name_callback(self, data):
+ if data.data[:5] == "robot":
+ self.model_name = data.data
+ if data.data[:4] == "quad":
+ self.model_name = data.data
+
+ def publish_poses(self, action):
+ self.ros2_pub_current_pose.publish(self.current_pose)
+ forward_step = np.cos(action[0] * 22.5 / 180 * np.pi)
+ side_step = np.sin(action[0] * 22.5 / 180 * np.pi)
+ yaw_step = action[1] * 22.5 / 180 * np.pi
+ self.target_pose.pose.position.x = self.current_pose.pose.position.x + forward_step * np.cos(
+ self.current_yaw) - side_step * np.sin(self.current_yaw)
+ self.target_pose.pose.position.y = self.current_pose.pose.position.y + forward_step * np.sin(
+ self.current_yaw) + side_step * np.cos(self.current_yaw)
+ self.target_pose.pose.position.z = self.current_pose.pose.position.z
+ self.target_pose.pose.orientation = euler_to_quaternion(0, 0, yaw=self.current_yaw + yaw_step)
+ self.ros2_pub_target_pose.publish(self.target_pose)
+
+
+def main(args=None):
+ rclpy.init(args=args)
+ end_to_end_planner_node = EndToEndPlannerNode()
+ rclpy.spin(end_to_end_planner_node)
+ end_to_end_planner_node.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
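
`publish_poses` above converts the learner's discrete action into a body-frame forward/side step plus a yaw increment, then rotates that step into the world frame using the current yaw. A minimal sketch of the same geometry follows (plain NumPy, no ROS; `next_target` and the sample action are illustrative):

```python
# Minimal sketch: turn an action [a0, a1] into a new target position and yaw,
# following the body-to-world rotation used in publish_poses() above.
import numpy as np

def next_target(x, y, yaw, action, step_angle_deg=22.5):
    ang = action[0] * np.deg2rad(step_angle_deg)        # body-frame step direction
    forward, side = np.cos(ang), np.sin(ang)            # body-frame step components
    yaw_step = action[1] * np.deg2rad(step_angle_deg)   # yaw increment
    # Rotate the body-frame step into the world frame by the current yaw
    new_x = x + forward * np.cos(yaw) - side * np.sin(yaw)
    new_y = y + forward * np.sin(yaw) + side * np.cos(yaw)
    return new_x, new_y, yaw + yaw_step

if __name__ == "__main__":
    print(next_target(0.0, 0.0, 0.0, action=[1, -1]))
```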
diff --git a/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planning_robot_driver.py b/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planning_robot_driver.py
new file mode 100644
index 0000000000..39394af5c7
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/opendr_planning/end_to_end_planning_robot_driver.py
@@ -0,0 +1,25 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import rclpy
+
+
+class EndToEndPlanningUAVRobotDriver:
+ def init(self, webots_node, properties):
+ rclpy.init(args=None)
+ self.__node = rclpy.create_node('end_to_end_planning_uav_robot_driver')
+
+ def step(self):
+ rclpy.spin_once(self.__node, timeout_sec=0)
diff --git a/projects/opendr_ws_2/src/opendr_planning/package.xml b/projects/opendr_ws_2/src/opendr_planning/package.xml
new file mode 100644
index 0000000000..b9a5f338c1
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/package.xml
@@ -0,0 +1,28 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_planning</name>
+  <version>2.0.0</version>
+  <description>OpenDR ROS2 nodes for the planning package</description>
+  <maintainer email="tefas@csd.auth.gr">OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+
+  <depend>webots_ros2_driver</depend>
+
+  <depend>rclpy</depend>
+
+  <depend>std_msgs</depend>
+  <depend>vision_msgs</depend>
+  <depend>geometry_msgs</depend>
+
+  <depend>opendr_bridge</depend>
+
+  <test_depend>ament_copyright</test_depend>
+  <test_depend>ament_flake8</test_depend>
+  <test_depend>ament_pep257</test_depend>
+  <test_depend>python3-pytest</test_depend>
+
+  <export>
+    <build_type>ament_python</build_type>
+  </export>
+</package>
diff --git a/projects/opendr_ws_2/src/opendr_planning/protos/box.proto b/projects/opendr_ws_2/src/opendr_planning/protos/box.proto
new file mode 100644
index 0000000000..9c34af8955
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/protos/box.proto
@@ -0,0 +1,88 @@
+#VRML_SIM R2022b utf8
+# license: Copyright Cyberbotics Ltd. Licensed for use only with Webots.
+# license url: https://cyberbotics.com/webots_assets_license
+# This bounding object with a pipe shape is formed by a group of boxes.
+PROTO box [
+ field SFFloat height 0.2 # Defines the height of the pipe.
+ field SFFloat radius 0.5 # Defines the radius of the pipe.
+ field SFFloat thickness 0.05 # Defines the thickness of the pipe.
+ field SFInt32 subdivision 8 # Defines the number of polygons used to represent the pipe and so its resolution.
+  field SFFloat accuracy 0.0001 # Defines how much the box positions can differ along the y axis: a value of 0 represents an error-free model but slows down the simulation.
+]
+{
+ %{
+ local wbrandom = require('wbrandom')
+
+ -- parameter checking
+ local subdivision = fields.subdivision.value
+ if subdivision > 200 then
+ io.stderr:write("High value for 'subdivision'. This can slow down the simulation\n")
+ elseif subdivision < 8 then
+ io.stderr:write("'subdivision' must be greater than or equal to 8\n")
+ subdivision = 8
+ end
+
+ local height = fields.height.value
+ if height <= 0 then
+ io.stderr:write("'height' must be greater than 0\n")
+ height = fields.height.defaultValue
+ end
+
+ local radius = fields.radius.value
+ if radius <= 0 then
+ io.stderr:write("'radius' must be greater than 0\n")
+ radius = fields.radius.defaultValue
+ end
+
+ local thickness = fields.thickness.value
+ if thickness <= 0 then
+ io.stderr:write("'thickness' must be greater than 0\n")
+ thickness = radius / 2
+ elseif thickness >= fields.radius.value then
+ io.stderr:write("'thickness' must be smaller than 'radius'\n")
+ thickness = radius / 2
+ end
+
+ -- global stuff before entering in the main loop
+ local beta = 2.0 * math.pi / subdivision
+ local alpha = beta / 2.0
+ local innerRadius = radius - thickness
+ local su = radius * math.cos(alpha) - innerRadius
+ if su < 0 then
+ -- fixed edge case:
+      -- There are 2 inner radii, depending on whether we measure along the center or along the edge of the boxes.
+      -- If the thickness is below the difference of these two radii, the algorithm cannot produce a valid subdivision.
+ io.stderr:write("Either 'thickness' or 'subdivision' are too small for the box subdivision algorithm.\n")
+ su = math.abs(su)
+ end
+ local sv = height
+ local sw = radius * math.sin(alpha) * 2.0
+ local boxRadius = innerRadius + su / 2.0
+ }%
+ Group { # set of boxes
+ children [
+ %{ for i = 0, (subdivision - 1) do }%
+ %{
+ -- position of an internal box
+ local gamma = beta * i + beta / 2
+ local ax = boxRadius * math.sin(gamma)
+ local ay = 0
+ local az = boxRadius * math.cos(gamma)
+ local angle = gamma + 0.5 * math.pi
+ -- add small offset to boxes y translation to reduce constraints
+ -- on the top and bottom face due to co-planarity
+ local offset = wbrandom.real(-1.0, 1.0) * fields.accuracy.value;
+ }%
+ Transform {
+ translation %{= ax}% %{= ay + offset }% %{= az}%
+ rotation 0 1 0 %{= angle }%
+ children [
+ Box {
+ size %{= su}% %{= sv}% %{= sw}%
+ }
+ ]
+ }
+ %{ end }%
+ ]
+ }
+}
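
The proto above approximates a pipe by placing `subdivision` boxes on a circle of radius `boxRadius`, each rotated tangentially. A minimal Python sketch of the same placement math follows (outside Webots; `pipe_boxes` is an illustrative name):

```python
# Minimal sketch: compute the size and placement of the boxes that approximate
# the pipe, following the Lua math embedded in box.proto above.
import math

def pipe_boxes(height=0.2, radius=0.5, thickness=0.05, subdivision=8):
    beta = 2.0 * math.pi / subdivision      # angular span of one box
    alpha = beta / 2.0
    inner_radius = radius - thickness
    su = abs(radius * math.cos(alpha) - inner_radius)   # box depth
    sv = height                                          # box height
    sw = radius * math.sin(alpha) * 2.0                  # box width
    box_radius = inner_radius + su / 2.0                 # center-line radius
    boxes = []
    for i in range(subdivision):
        gamma = beta * i + beta / 2.0
        x = box_radius * math.sin(gamma)
        z = box_radius * math.cos(gamma)
        boxes.append({"size": (su, sv, sw),
                      "translation": (x, 0.0, z),
                      "rotation_y": gamma + 0.5 * math.pi})
    return boxes

if __name__ == "__main__":
    for b in pipe_boxes():
        print(b)
```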
diff --git a/projects/perception/__init__.py b/projects/opendr_ws_2/src/opendr_planning/resource/opendr_planning
similarity index 100%
rename from projects/perception/__init__.py
rename to projects/opendr_ws_2/src/opendr_planning/resource/opendr_planning
diff --git a/projects/opendr_ws_2/src/opendr_planning/resource/uav_robot.urdf b/projects/opendr_ws_2/src/opendr_planning/resource/uav_robot.urdf
new file mode 100644
index 0000000000..7b99a8080c
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/resource/uav_robot.urdf
@@ -0,0 +1,34 @@
+
+
+
+
+
+ true
+ true
+
+
+
+
+
+ true
+ true
+
+
+
+
+
+ true
+ true
+
+
+
+
+ true
+ /imu
+ true
+ inertial_unit
+
+
+
+
+
diff --git a/projects/opendr_ws_2/src/opendr_planning/setup.cfg b/projects/opendr_ws_2/src/opendr_planning/setup.cfg
new file mode 100644
index 0000000000..35a3135d7e
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/setup.cfg
@@ -0,0 +1,6 @@
+[develop]
+script_dir=$base/lib/opendr_planning
+[install]
+install_scripts=$base/lib/opendr_planning
+[build_scripts]
+executable = /usr/bin/env python3
diff --git a/projects/opendr_ws_2/src/opendr_planning/setup.py b/projects/opendr_ws_2/src/opendr_planning/setup.py
new file mode 100644
index 0000000000..37cb78733e
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/setup.py
@@ -0,0 +1,31 @@
+from setuptools import setup
+
+package_name = 'opendr_planning'
+data_files = []
+data_files.append(('share/ament_index/resource_index/packages', ['resource/' + package_name]))
+data_files.append(('share/' + package_name + '/launch', ['launch/end_to_end_planning_robot_launch.py']))
+data_files.append(('share/' + package_name + '/worlds', ['worlds/train-no-dynamic-random-obstacles.wbt']))
+data_files.append(('share/' + package_name + '/protos', ['protos/box.proto']))
+data_files.append(('share/' + package_name + '/resource', ['resource/uav_robot.urdf']))
+data_files.append(('share/' + package_name, ['package.xml']))
+
+
+setup(
+ name=package_name,
+ version='2.0.0',
+ packages=[package_name],
+ data_files=data_files,
+ install_requires=['setuptools'],
+ zip_safe=True,
+ maintainer='OpenDR Project Coordinator',
+ maintainer_email='tefas@csd.auth.gr',
+ description='OpenDR ROS2 nodes for the planning package',
+ license='Apache License v2.0',
+ tests_require=['pytest'],
+ entry_points={
+ 'console_scripts': [
+ 'end_to_end_planner = opendr_planning.end_to_end_planner_node:main',
+ 'end_to_end_planning_robot_driver = opendr_planning.end_to_end_planning_robot_driver:main',
+ ],
+ },
+)
diff --git a/projects/opendr_ws_2/src/opendr_planning/test/test_copyright.py b/projects/opendr_ws_2/src/opendr_planning/test/test_copyright.py
new file mode 100644
index 0000000000..cc8ff03f79
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/test/test_copyright.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_copyright.main import main
+import pytest
+
+
+@pytest.mark.copyright
+@pytest.mark.linter
+def test_copyright():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found errors'
diff --git a/projects/opendr_ws_2/src/opendr_planning/test/test_flake8.py b/projects/opendr_ws_2/src/opendr_planning/test/test_flake8.py
new file mode 100644
index 0000000000..27ee1078ff
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/test/test_flake8.py
@@ -0,0 +1,25 @@
+# Copyright 2017 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_flake8.main import main_with_errors
+import pytest
+
+
+@pytest.mark.flake8
+@pytest.mark.linter
+def test_flake8():
+ rc, errors = main_with_errors(argv=[])
+ assert rc == 0, \
+ 'Found %d code style errors / warnings:\n' % len(errors) + \
+ '\n'.join(errors)
diff --git a/projects/opendr_ws_2/src/opendr_planning/test/test_pep257.py b/projects/opendr_ws_2/src/opendr_planning/test/test_pep257.py
new file mode 100644
index 0000000000..b234a3840f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/test/test_pep257.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_pep257.main import main
+import pytest
+
+
+@pytest.mark.linter
+@pytest.mark.pep257
+def test_pep257():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found code style errors / warnings'
diff --git a/projects/opendr_ws_2/src/opendr_planning/worlds/train-no-dynamic-random-obstacles.wbt b/projects/opendr_ws_2/src/opendr_planning/worlds/train-no-dynamic-random-obstacles.wbt
new file mode 100644
index 0000000000..aff3322fe9
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_planning/worlds/train-no-dynamic-random-obstacles.wbt
@@ -0,0 +1,503 @@
+#VRML_SIM R2022b utf8
+
+EXTERNPROTO "https://raw.githubusercontent.com/cyberbotics/webots/R2022b/projects/appearances/protos/Grass.proto"
+EXTERNPROTO "https://raw.githubusercontent.com/cyberbotics/webots/R2022b/projects/appearances/protos/Parquetry.proto"
+EXTERNPROTO "https://raw.githubusercontent.com/cyberbotics/webots/R2022b/projects/objects/floors/protos/Floor.proto"
+EXTERNPROTO "https://raw.githubusercontent.com/cyberbotics/webots/R2022b/projects/objects/apartment_structure/protos/Wall.proto"
+EXTERNPROTO "../protos/box.proto"
+
+WorldInfo {
+ gravity 9.80665
+ basicTimeStep 1
+ FPS 15
+ optimalThreadCount 4
+ randomSeed 52
+}
+Viewpoint {
+ orientation 0.2493542513111129 -0.0015806740935321666 -0.9684110484822468 3.0320770615235597
+ position 31.77129355822201 3.9289180767659815 21.40152949153122
+ followType "Mounted Shot"
+}
+DEF DEF_VEHICLE Robot {
+ translation -3.20133 -0.667551 2.5
+ rotation 0.5387460067434838 -0.5957150074565648 -0.5957150074565648 2.15327
+ children [
+ Lidar {
+ translation 0 0.07 0
+ rotation 3.4621799999783786e-06 -0.999999999993755 -7.095049999955691e-07 3.14159
+ horizontalResolution 32
+ fieldOfView 1.57
+ verticalFieldOfView 0.1
+ numberOfLayers 1
+ minRange 0.3
+ maxRange 5
+ }
+ RangeFinder {
+ translation 0 0.1 0
+ rotation -0.5773502691896258 -0.5773502691896258 -0.5773502691896258 2.0943951023931957
+ maxRange 15
+ }
+ TouchSensor {
+ translation 0 0.03 0
+ rotation 0 1 0 1.5708
+ name "touch sensor-collision"
+ boundingObject box {
+ }
+ }
+ TouchSensor {
+ translation 0 0.03 0.5
+ rotation 0 1 0 1.5708
+ name "touch sensor-safety1"
+ boundingObject box {
+ radius 1
+ subdivision 12
+ }
+ }
+ TouchSensor {
+ translation 0 0.03 1
+ rotation 0 1 0 1.5708
+ name "touch sensor-safety2"
+ boundingObject box {
+ radius 1.5
+ subdivision 16
+ }
+ }
+ Receiver {
+ name "receiver_main"
+ type "serial"
+ channel 1
+ bufferSize 32
+ }
+ Emitter {
+ name "emitter_plugin"
+ description "commuicates with physics plugin"
+ }
+ Shape {
+ appearance Appearance {
+ material Material {
+ }
+ }
+ geometry Box {
+ size 0.1 0.1 0.1
+ }
+ }
+ Camera {
+ translation 0 0.12 0
+ rotation 0.1294279597735375 0.9831056944488314 0.1294279597735375 -1.58783
+ name "camera1"
+ width 128
+ height 128
+ }
+ Compass {
+ name "compass1"
+ }
+ GPS {
+ name "gps"
+ }
+ Accelerometer {
+ name "accelerometer1"
+ }
+ InertialUnit {
+ rotation 0 1 0 1.5707947122222805
+ name "inertial_unit"
+ }
+ Gyro {
+ name "gyro1"
+ }
+ Transform {
+ translation 0 0 0.1
+ children [
+ Shape {
+ appearance Appearance {
+ material Material {
+ }
+ }
+ geometry DEF DEF_ARM Cylinder {
+ height 0.1
+ radius 0.01
+ }
+ }
+ ]
+ }
+ Transform {
+ translation -0.09999999999999999 0 0
+ rotation -0.7071067811865476 0 0.7071067811865476 -3.1415923071795864
+ children [
+ Shape {
+ appearance Appearance {
+ material Material {
+ }
+ }
+ geometry USE DEF_ARM
+ }
+ ]
+ }
+ Transform {
+ translation 0.09999999999999999 0 0
+ rotation 0 -1 0 -1.5707963071795863
+ children [
+ Shape {
+ appearance Appearance {
+ material Material {
+ diffuseColor 1 0.09999999999999999 0
+ }
+ }
+ geometry USE DEF_ARM
+ }
+ ]
+ }
+ Transform {
+ translation 0 0 -0.1
+ children [
+ Shape {
+ appearance Appearance {
+ material Material {
+ diffuseColor 0.7999999999999999 0.7999999999999999 0.7999999999999999
+ }
+ }
+ geometry USE DEF_ARM
+ }
+ ]
+ }
+ ]
+ name "quad_plus_sitl"
+ boundingObject Box {
+ size 0.1 0.1 0.1
+ }
+ rotationStep 0.261799
+ controller ""
+ customData "1"
+ supervisor TRUE
+}
+Background {
+ skyColor [
+ 0.15 0.5 1
+ ]
+}
+DirectionalLight {
+}
+Floor {
+ translation 0 0 -1
+ rotation 0 0 1 1.5707963267948966
+ size 500 750
+ appearance Grass {
+ }
+}
+Floor {
+ translation -4 0 -0.96
+ rotation 0 0 1 1.5707963267948966
+ name "floor(13)"
+ size 0.5 30
+ appearance Parquetry {
+ type "dark strip"
+ }
+}
+Floor {
+ translation -8 -14 -0.98
+ rotation 0 0 1 1.5707963267948966
+ name "floor(5)"
+ size 100 50
+ appearance PBRAppearance {
+ baseColor 0.6 0.8 0.6
+ roughness 1
+ }
+}
+DEF cyl1 Solid {
+ translation -13.30571834554473 -1.447574483178714 2.7665126217916747
+ rotation 0.7046199859242116 -0.2718054272768975 -0.6554635650735948 1.3264162624880482
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF cyl_geo1 Cylinder {
+ height 1.6358972201698152
+ radius 0.8305567381873773
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(6)"
+ boundingObject USE cyl_geo1
+}
+DEF cyl2 Solid {
+ translation -11.573784058504305 -0.5709706439613236 2.7898036661292727
+ rotation 0.80041453284557 -0.23379069518091386 -0.5519768894224041 3.004019614452083
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF cyl_geo2 Cylinder {
+ height 1.5666220746502095
+ radius 1.4073464879682038
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(16)"
+ boundingObject USE cyl_geo2
+}
+DEF cyl3 Solid {
+ translation 6.495757807871515 -1.6144414097525925 2.055833951531991
+ rotation 0.9501520694787192 0.1803287878394691 -0.254347347424059 1.1144016628344635
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF cyl_geo3 Cylinder {
+ height 2.9932008423005847
+ radius 1.3817552987759123
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(17)"
+ boundingObject USE cyl_geo3
+}
+DEF cyl4 Solid {
+ translation 0 0 -10
+ rotation 0.8826129905240483 -0.436261871860521 0.17512820480707927 -3.0124718491193443
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF cyl_geo4 Cylinder {
+ height 2.040387292247227
+ radius 1.7321406926258653
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(18)"
+ boundingObject USE cyl_geo4
+}
+DEF cyl5 Solid {
+ translation 0 0 -10
+ rotation -0.3917242543263733 0.07876246896092191 -0.9167052863683216 0.9303512269603899
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF cyl_geo5 Cylinder {
+ height 2.4768414116000366
+ radius 0.5824817005442169
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(19)"
+ boundingObject USE cyl_geo5
+}
+DEF box1 Solid {
+ translation 4.4381089093275685 0.5548170365208641 2.05131692563986
+ rotation 0.2448556165007751 0.9367176515026089 0.2502114474428831 -2.914945226248721
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF box_geo1 Box {
+ size 0.8334023756695101 0.6127140086440774 2.1756103342302913
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(20)"
+ boundingObject USE box_geo1
+}
+DEF box2 Solid {
+ translation 0 0 -10
+ rotation -0.7163183367896099 0.6204835974021974 0.31919922577254956 2.929261604379051
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF box_geo2 Box {
+ size 1.6555731912544518 0.8528384366701209 1.5923867066800264
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(21)"
+ boundingObject USE box_geo2
+}
+DEF box3 Solid {
+ translation 0 0 -10
+ rotation 0.492702975086357 0.008495842259129496 0.8701560773823055 -3.124774550627343
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF box_geo3 Box {
+ size 1.114861834585034 1.9899789593315744 1.665194050916234
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(22)"
+ boundingObject USE box_geo3
+}
+DEF box4 Solid {
+ translation 0 0 -10
+ rotation -0.47381905460959706 -0.5794103506313973 0.6631584645241805 -2.2430503148315895
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF box_geo4 Box {
+ size 1.6228519285122363 1.1501776483206156 2.2316284316140305
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(23)"
+ boundingObject USE box_geo4
+}
+DEF box5 Solid {
+ translation 0 0 -10
+ rotation 0.1849655628048051 0.930668272300889 0.3156648658130647 3.098971634530017
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF box_geo5 Box {
+ size 2.198602344698272 0.9299983006419481 1.8591651370902504
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(24)"
+ boundingObject USE box_geo5
+}
+DEF sph1 Solid {
+ translation -19.257198265348357 -3.1661159326488217 2.225830049481242
+ rotation 0.46953082387497425 0.2604920627631049 0.8436140650017107 -2.2344190120762484
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF sph_geo1 Sphere {
+ radius 1.35574388768385
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(25)"
+ boundingObject USE sph_geo1
+}
+DEF sph2 Solid {
+ translation 0.2181211849140201 -0.5886797657584887 2.5285623758667715
+ rotation 0.46953082387497425 0.2604920627631049 0.8436140650017107 -2.2344190120762484
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF sph_geo2 Sphere {
+ radius 1.365103979645272
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(26)"
+ boundingObject USE sph_geo2
+}
+DEF sph3 Solid {
+ translation 0 0 -10
+ rotation 0.46953082387497425 0.2604920627631049 0.8436140650017107 -2.2344190120762484
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF sph_geo3 Sphere {
+ radius 1.5576301083903183
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(27)"
+ boundingObject USE sph_geo3
+}
+DEF sph4 Solid {
+ translation 0 0 -10
+ rotation 0.46953082387497425 0.2604920627631049 0.8436140650017107 -2.2344190120762484
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF sph_geo4 Sphere {
+ radius 1.8204413448018755
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(28)"
+ boundingObject USE sph_geo4
+}
+DEF sph5 Solid {
+ translation 0 0 -10
+ rotation 0.46953082387497425 0.2604920627631049 0.8436140650017107 -2.2344190120762484
+ children [
+ Shape {
+ appearance PBRAppearance {
+ baseColor 0.6 0.3 0.0235294
+ roughness 1
+ metalness 0
+ }
+ geometry DEF sph_geo5 Sphere {
+ radius 2.2713871330568587
+ }
+ castShadows FALSE
+ }
+ ]
+ name "solid(29)"
+ boundingObject USE sph_geo5
+}
+DEF wall1 Wall {
+ translation -4 -4.602323054921962 -9
+ size 30 0.1 7
+}
+DEF wall2 Wall {
+ translation -4 4.602323054921962 -9
+ name "wall(2)"
+ size 30 0.1 7
+}
diff --git a/projects/perception/activity_recognition/demos/online_recognition/activity_recognition/__init__.py b/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/__init__.py
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/activity_recognition/__init__.py
rename to projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/__init__.py
diff --git a/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py b/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py
new file mode 100644
index 0000000000..c4ca320acc
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py
@@ -0,0 +1,108 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import rclpy
+from rclpy.node import Node
+
+import cv2
+import os
+import argparse
+from cv_bridge import CvBridge
+from opendr_bridge import ROS2Bridge
+from std_msgs.msg import Bool
+from opendr_interface.srv import ImgToMesh
+from opendr.simulation.human_model_generation.utilities.model_3D import Model_3D
+
+
+class HumanModelGenerationClient(Node):
+
+ def __init__(self, service_name="human_model_generation"):
+ """
+        Creates a ROS2 client for human model generation
+ :param service_name: The name of the service
+ :type service_name: str
+ """
+ super().__init__('human_model_generation_client')
+ self.bridge_cv = CvBridge()
+ self.bridge_ros = ROS2Bridge()
+ self.cli = self.create_client(ImgToMesh, service_name)
+ while not self.cli.wait_for_service(timeout_sec=1.0):
+ self.get_logger().info('service not available, waiting again...')
+ self.req = ImgToMesh.Request()
+
+ def send_request(self, img_rgb, img_msk, extract_pose):
+        """
+        Sends a request to the service that generates a human model from an image
+        :param img_rgb: The RGB image depicting a human
+        :type img_rgb: numpy.ndarray (OpenCV BGR image)
+        :param img_msk: The mask image of the human's silhouette
+        :type img_msk: numpy.ndarray (OpenCV BGR image)
+        :param extract_pose: Whether to also extract the 3D pose of the depicted human
+        :type extract_pose: bool
+        :return: A tuple containing the generated human model and the extracted 3D pose
+        :rtype: tuple of (opendr.simulation.human_model_generation.utilities.model_3D.Model_3D, engine.target.Pose)
+        """
+ extract_pose_ros = Bool()
+ extract_pose_ros.data = extract_pose
+ self.req.img_rgb = self.bridge_cv.cv2_to_imgmsg(img_rgb, encoding="bgr8")
+ self.req.img_msk = self.bridge_cv.cv2_to_imgmsg(img_msk, encoding="bgr8")
+ self.req.extract_pose = extract_pose_ros
+ self.future = self.cli.call_async(self.req)
+ rclpy.spin_until_future_complete(self, self.future)
+ resp = self.future.result()
+ pose = self.bridge_ros.from_ros_pose_3D(resp.pose)
+ vertices, triangles = self.bridge_ros.from_ros_mesh(resp.mesh)
+ vertex_colors = self.bridge_ros.from_ros_colors(resp.vertex_colors)
+ human_model = Model_3D(vertices, triangles, vertex_colors)
+ return human_model, pose
+
+
+def main():
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--srv_name", help="The name of the service",
+ type=str, default="human_model_generation")
+ parser.add_argument("--img_rgb", help="Path for RGB image", type=str,
+ default=os.path.join(os.environ['OPENDR_HOME'], 'projects/python/simulation/'
+ 'human_model_generation/demos/'
+ 'imgs_input/rgb/result_0004.jpg'))
+ parser.add_argument("--img_msk", help="Path for mask image", type=str,
+ default=os.path.join(os.environ['OPENDR_HOME'], 'projects/python/simulation/'
+ 'human_model_generation/demos/'
+ 'imgs_input/msk/result_0004.jpg'))
+ parser.add_argument("--rot_angles", help="Yaw angles for rotating the generated model",
+ nargs="+", default=['30', '120'])
+ parser.add_argument("--extract_pose", help="Whether to extract pose or not", action='store_true')
+ parser.add_argument("--plot_kps", help="Whether to plot the keypoints of the extracted pose",
+ action='store_true')
+ parser.add_argument("--out_path", help="Path for outputting the renderings/models", type=str,
+ default=os.path.join(os.environ['OPENDR_HOME'], 'projects/opendr_ws_2'))
+ args = parser.parse_args()
+ rot_angles = [int(x) for x in args.rot_angles]
+ img_rgb = cv2.imread(args.img_rgb)
+ img_msk = cv2.imread(args.img_msk)
+ rclpy.init()
+ client = HumanModelGenerationClient(service_name=args.srv_name)
+ [human_model, pose] = client.send_request(img_rgb, img_msk, extract_pose=args.extract_pose)
+ human_model.save_obj_mesh(os.path.join(args.out_path, 'human_model.obj'))
+ [out_imgs, _] = human_model.get_img_views(rot_angles, human_pose_3D=pose, plot_kps=args.plot_kps)
+ for i, out_img in enumerate(out_imgs):
+ cv2.imwrite(os.path.join(args.out_path, 'rendering' + str(rot_angles[i]) + '.jpg'), out_imgs[i].opencv())
+ client.destroy_node()
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py b/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py
new file mode 100644
index 0000000000..39d1a97fa6
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py
@@ -0,0 +1,109 @@
+#!/usr/bin/env python
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import rclpy
+from rclpy.node import Node
+import argparse
+import os
+import torch
+from opendr_bridge import ROS2Bridge
+from opendr.simulation.human_model_generation.pifu_generator_learner import PIFuGeneratorLearner
+from opendr_interface.srv import ImgToMesh
+from opendr.engine.target import Pose
+from rclpy.callback_groups import MutuallyExclusiveCallbackGroup
+
+
+class PifuService(Node):
+
+ def __init__(self, service_name='human_model_generation', device="cuda", checkpoint_dir='.'):
+ """
+        Creates a ROS2 service for human model generation
+ :param service_name: The name of the service
+ :type service_name: str
+ :param device: device on which we are running inference ('cpu' or 'cuda')
+ :type device: str
+ :param checkpoint_dir: the directory where the PIFu weights will be downloaded/loaded
+ :type checkpoint_dir: str
+ """
+ super().__init__('human_model_generation_service')
+ self.bridge = ROS2Bridge()
+ self.service_name = service_name
+        # Initialize the human model generation learner
+ self.model_generator = PIFuGeneratorLearner(device=device, checkpoint_dir=checkpoint_dir)
+ my_callback_group = MutuallyExclusiveCallbackGroup()
+
+        self.srv = self.create_service(ImgToMesh, self.service_name, self.gen_callback, callback_group=my_callback_group)
+
+ def gen_callback(self, request, response):
+        """
+        Callback that processes the incoming request and fills in the service response
+        :param request: The service request
+        :type request: SrvTypeRequest
+        :param response: The service response
+        :type response: SrvTypeResponse
+        :return: The filled service response
+        :rtype: SrvTypeResponse
+        """
+ img_rgb = self.bridge.from_ros_image(request.img_rgb)
+ img_msk = self.bridge.from_ros_image(request.img_msk)
+ extract_pose = request.extract_pose.data
+ output = self.model_generator.infer([img_rgb], [img_msk], extract_pose=extract_pose)
+ if extract_pose is True:
+ model_3D = output[0]
+ pose = output[1]
+ else:
+ model_3D = output
+ pose = Pose([], 0.0)
+ verts = model_3D.get_vertices()
+ faces = model_3D.get_faces()
+ vert_colors = model_3D.vert_colors
+ response.mesh = self.bridge.to_ros_mesh(verts, faces)
+ response.vertex_colors = self.bridge.to_ros_colors(vert_colors)
+ response.pose = self.bridge.to_ros_pose_3D(pose)
+ return response
+
+
+def main():
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+ type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--srv_name", help="The name of the service",
+ type=str, default="human_model_generation")
+ parser.add_argument("--checkpoint_dir", help="Path to directory for the checkpoints of the method's network",
+ type=str, default=os.path.join(os.environ['OPENDR_HOME'], 'projects/opendr_ws_2'))
+ args = parser.parse_args()
+
+ try:
+ if args.device == "cuda" and torch.cuda.is_available():
+ device = "cuda"
+ elif args.device == "cuda":
+ print("GPU not found. Using CPU instead.")
+ device = "cpu"
+ else:
+ print("Using CPU.")
+ device = "cpu"
+ except:
+ print("Using CPU.")
+ device = "cpu"
+
+ rclpy.init()
+ pifu_service = PifuService(service_name=args.srv_name, device=device, checkpoint_dir=args.checkpoint_dir)
+ rclpy.spin(pifu_service)
+ rclpy.shutdown()
+
+
+if __name__ == '__main__':
+ main()
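For reference, both nodes above can also be started directly as Python scripts. The commands below are only a minimal sketch and not part of the patch: they assume a sourced ROS2 installation, a built `opendr_ws_2` workspace (so that `opendr_bridge` and `opendr_interface` are importable) and an activated OpenDR environment; every flag corresponds to an `argparse` option defined in the two files above.

```bash
# Terminal 1: start the human model generation service (falls back to CPU if CUDA is unavailable)
python3 human_model_generation_service.py --device cuda

# Terminal 2: send a request using the default demo images and also extract the 3D pose
python3 human_model_generation_client.py --extract_pose --plot_kps
```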
diff --git a/projects/opendr_ws_2/src/opendr_simulation/package.xml b/projects/opendr_ws_2/src/opendr_simulation/package.xml
new file mode 100644
index 0000000000..bcba4eab8d
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/package.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0"?>
+<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
+<package format="3">
+  <name>opendr_simulation</name>
+  <version>2.0.0</version>
+  <description>OpenDR ROS2 nodes for the simulation package</description>
+  <maintainer email="tefas@csd.auth.gr">OpenDR Project Coordinator</maintainer>
+  <license>Apache License v2.0</license>
+
+  <depend>std_msgs</depend>
+  <depend>shape_msgs</depend>
+  <depend>sensor_msgs</depend>
+  <depend>vision_msgs</depend>
+  <depend>ament_cmake</depend>
+  <depend>rosidl_default_generators</depend>
+  <depend>rosidl_default_runtime</depend>
+  <depend>opendr_interface</depend>
+  <depend>rclpy</depend>
+  <depend>opendr_bridge</depend>
+  <member_of_group>rosidl_interface_packages</member_of_group>
+
+  <test_depend>ament_copyright</test_depend>
+  <test_depend>ament_flake8</test_depend>
+  <test_depend>ament_pep257</test_depend>
+  <test_depend>python3-pytest</test_depend>
+  <test_depend>ament_lint_auto</test_depend>
+  <test_depend>ament_lint_common</test_depend>
+
+  <export>
+    <build_type>ament_python</build_type>
+  </export>
+</package>
diff --git a/projects/perception/lightweight_open_pose/jetbot/utils/__init__.py b/projects/opendr_ws_2/src/opendr_simulation/resource/opendr_simulation
similarity index 100%
rename from projects/perception/lightweight_open_pose/jetbot/utils/__init__.py
rename to projects/opendr_ws_2/src/opendr_simulation/resource/opendr_simulation
diff --git a/projects/opendr_ws_2/src/opendr_simulation/setup.cfg b/projects/opendr_ws_2/src/opendr_simulation/setup.cfg
new file mode 100644
index 0000000000..58800215e6
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/setup.cfg
@@ -0,0 +1,6 @@
+[develop]
+script_dir=$base/lib/opendr_simulation
+[install]
+install_scripts=$base/lib/opendr_simulation
+[build_scripts]
+executable = /usr/bin/env python3
diff --git a/projects/opendr_ws_2/src/opendr_simulation/setup.py b/projects/opendr_ws_2/src/opendr_simulation/setup.py
new file mode 100644
index 0000000000..0cd2cca844
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/setup.py
@@ -0,0 +1,27 @@
+from setuptools import setup
+
+package_name = 'opendr_simulation'
+
+setup(
+ name=package_name,
+ version='2.0.0',
+ packages=[package_name],
+ data_files=[
+ ('share/ament_index/resource_index/packages',
+ ['resource/' + package_name]),
+ ('share/' + package_name, ['package.xml']),
+ ],
+ install_requires=['setuptools'],
+ zip_safe=True,
+ maintainer='OpenDR Project Coordinator',
+ maintainer_email='tefas@csd.auth.gr',
+ description='OpenDR ROS2 nodes for the simulation package',
+ license='Apache License v2.0',
+ tests_require=['pytest'],
+ entry_points={
+ 'console_scripts': [
+ 'human_model_generation_service = opendr_simulation.human_model_generation_service:main',
+ 'human_model_generation_client = opendr_simulation.human_model_generation_client:main'
+ ],
+ },
+)
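Because `setup.py` registers the service and client as `console_scripts`, the nodes can also be launched with `ros2 run` once the package is built. The following is a sketch under the assumption of a working ROS2 installation, with the commands issued from the `projects/opendr_ws_2` workspace root:

```bash
# Build the simulation package together with its OpenDR dependencies (package names taken from package.xml)
colcon build --packages-select opendr_interface opendr_bridge opendr_simulation
source install/setup.bash

# Terminal 1: service node
ros2 run opendr_simulation human_model_generation_service

# Terminal 2: client node
ros2 run opendr_simulation human_model_generation_client
```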
diff --git a/projects/opendr_ws_2/src/opendr_simulation/test/test_copyright.py b/projects/opendr_ws_2/src/opendr_simulation/test/test_copyright.py
new file mode 100644
index 0000000000..cc8ff03f79
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/test/test_copyright.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_copyright.main import main
+import pytest
+
+
+@pytest.mark.copyright
+@pytest.mark.linter
+def test_copyright():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found errors'
diff --git a/projects/opendr_ws_2/src/opendr_simulation/test/test_flake8.py b/projects/opendr_ws_2/src/opendr_simulation/test/test_flake8.py
new file mode 100644
index 0000000000..27ee1078ff
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/test/test_flake8.py
@@ -0,0 +1,25 @@
+# Copyright 2017 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_flake8.main import main_with_errors
+import pytest
+
+
+@pytest.mark.flake8
+@pytest.mark.linter
+def test_flake8():
+ rc, errors = main_with_errors(argv=[])
+ assert rc == 0, \
+ 'Found %d code style errors / warnings:\n' % len(errors) + \
+ '\n'.join(errors)
diff --git a/projects/opendr_ws_2/src/opendr_simulation/test/test_pep257.py b/projects/opendr_ws_2/src/opendr_simulation/test/test_pep257.py
new file mode 100644
index 0000000000..b234a3840f
--- /dev/null
+++ b/projects/opendr_ws_2/src/opendr_simulation/test/test_pep257.py
@@ -0,0 +1,23 @@
+# Copyright 2015 Open Source Robotics Foundation, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from ament_pep257.main import main
+import pytest
+
+
+@pytest.mark.linter
+@pytest.mark.pep257
+def test_pep257():
+ rc = main(argv=['.', 'test'])
+ assert rc == 0, 'Found code style errors / warnings'
diff --git a/projects/python/README.md b/projects/python/README.md
new file mode 100644
index 0000000000..b1a72da808
--- /dev/null
+++ b/projects/python/README.md
@@ -0,0 +1,6 @@
+# Python usage examples and tutorials
+
+
+This folder contains several usage examples and tutorials that demonstrate the functionalities of the OpenDR toolkit.
+The usage examples follow the same structure as the Python packages provided by OpenDR, i.e., they are organized separately into [perception](perception), [control](control) and [simulation](simulation) tools.
+Furthermore, usage examples of other utilities are provided in [utils](utils).
diff --git a/projects/perception/object_detection_3d/demos/voxel_object_detection_3d/__init__.py b/projects/python/__init__.py
similarity index 100%
rename from projects/perception/object_detection_3d/demos/voxel_object_detection_3d/__init__.py
rename to projects/python/__init__.py
diff --git a/projects/control/eagerx/README.md b/projects/python/control/eagerx/README.md
similarity index 97%
rename from projects/control/eagerx/README.md
rename to projects/python/control/eagerx/README.md
index 26825812a6..0a63adce48 100644
--- a/projects/control/eagerx/README.md
+++ b/projects/python/control/eagerx/README.md
@@ -22,7 +22,7 @@ Specifically the following examples are provided:
Example usage:
```bash
-cd $OPENDR_HOME/projects/control/eagerx/demos
+cd $OPENDR_HOME/projects/python/control/eagerx/demos
python3 [demo_name]
```
diff --git a/projects/control/eagerx/data/with_actions.h5 b/projects/python/control/eagerx/data/with_actions.h5
similarity index 100%
rename from projects/control/eagerx/data/with_actions.h5
rename to projects/python/control/eagerx/data/with_actions.h5
diff --git a/projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/src/__init__.py b/projects/python/control/eagerx/demos/__init__.py
similarity index 100%
rename from projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/src/__init__.py
rename to projects/python/control/eagerx/demos/__init__.py
diff --git a/projects/control/eagerx/demos/demo_classifier.py b/projects/python/control/eagerx/demos/demo_classifier.py
similarity index 100%
rename from projects/control/eagerx/demos/demo_classifier.py
rename to projects/python/control/eagerx/demos/demo_classifier.py
diff --git a/projects/control/eagerx/demos/demo_full_state.py b/projects/python/control/eagerx/demos/demo_full_state.py
similarity index 100%
rename from projects/control/eagerx/demos/demo_full_state.py
rename to projects/python/control/eagerx/demos/demo_full_state.py
diff --git a/projects/control/eagerx/demos/demo_pid.py b/projects/python/control/eagerx/demos/demo_pid.py
similarity index 100%
rename from projects/control/eagerx/demos/demo_pid.py
rename to projects/python/control/eagerx/demos/demo_pid.py
diff --git a/projects/control/eagerx/dependencies.ini b/projects/python/control/eagerx/dependencies.ini
similarity index 100%
rename from projects/control/eagerx/dependencies.ini
rename to projects/python/control/eagerx/dependencies.ini
diff --git a/projects/control/mobile_manipulation/CMakeLists.txt b/projects/python/control/mobile_manipulation/CMakeLists.txt
similarity index 100%
rename from projects/control/mobile_manipulation/CMakeLists.txt
rename to projects/python/control/mobile_manipulation/CMakeLists.txt
diff --git a/projects/control/mobile_manipulation/README.md b/projects/python/control/mobile_manipulation/README.md
similarity index 100%
rename from projects/control/mobile_manipulation/README.md
rename to projects/python/control/mobile_manipulation/README.md
diff --git a/projects/control/mobile_manipulation/best_defaults.yaml b/projects/python/control/mobile_manipulation/best_defaults.yaml
similarity index 100%
rename from projects/control/mobile_manipulation/best_defaults.yaml
rename to projects/python/control/mobile_manipulation/best_defaults.yaml
diff --git a/projects/control/mobile_manipulation/mobile_manipulation_demo.py b/projects/python/control/mobile_manipulation/mobile_manipulation_demo.py
similarity index 100%
rename from projects/control/mobile_manipulation/mobile_manipulation_demo.py
rename to projects/python/control/mobile_manipulation/mobile_manipulation_demo.py
diff --git a/projects/control/mobile_manipulation/package.xml b/projects/python/control/mobile_manipulation/package.xml
similarity index 100%
rename from projects/control/mobile_manipulation/package.xml
rename to projects/python/control/mobile_manipulation/package.xml
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax.dae b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1.dae b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer1_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2.dae b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/KallaxDrawer2_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/meshes/Kallax_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/model.config b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/model.config
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/model.config
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/model.config
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax/model.sdf b/projects/python/control/mobile_manipulation/robots_world/models/Kallax/model.sdf
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax/model.sdf
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax/model.sdf
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax.dae b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer.dae b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_Tuer_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/meshes/Kallax_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/model.config b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/model.config
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/model.config
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/model.config
diff --git a/projects/control/mobile_manipulation/robots_world/models/Kallax2/model.sdf b/projects/python/control/mobile_manipulation/robots_world/models/Kallax2/model.sdf
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/Kallax2/model.sdf
rename to projects/python/control/mobile_manipulation/robots_world/models/Kallax2/model.sdf
diff --git a/projects/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli1_tex_0.jpg b/projects/python/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli1_tex_0.jpg
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli1_tex_0.jpg
rename to projects/python/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli1_tex_0.jpg
diff --git a/projects/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli2.dae b/projects/python/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli2.dae
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli2.dae
rename to projects/python/control/mobile_manipulation/robots_world/models/muesli2/meshes/muesli2.dae
diff --git a/projects/control/mobile_manipulation/robots_world/models/muesli2/model.config b/projects/python/control/mobile_manipulation/robots_world/models/muesli2/model.config
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/muesli2/model.config
rename to projects/python/control/mobile_manipulation/robots_world/models/muesli2/model.config
diff --git a/projects/control/mobile_manipulation/robots_world/models/muesli2/model.sdf b/projects/python/control/mobile_manipulation/robots_world/models/muesli2/model.sdf
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/muesli2/model.sdf
rename to projects/python/control/mobile_manipulation/robots_world/models/muesli2/model.sdf
diff --git a/projects/control/mobile_manipulation/robots_world/models/reemc_table_low/model.config b/projects/python/control/mobile_manipulation/robots_world/models/reemc_table_low/model.config
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/reemc_table_low/model.config
rename to projects/python/control/mobile_manipulation/robots_world/models/reemc_table_low/model.config
diff --git a/projects/control/mobile_manipulation/robots_world/models/reemc_table_low/table.sdf b/projects/python/control/mobile_manipulation/robots_world/models/reemc_table_low/table.sdf
similarity index 100%
rename from projects/control/mobile_manipulation/robots_world/models/reemc_table_low/table.sdf
rename to projects/python/control/mobile_manipulation/robots_world/models/reemc_table_low/table.sdf
diff --git a/projects/control/mobile_manipulation/rviz_config.rviz b/projects/python/control/mobile_manipulation/rviz_config.rviz
similarity index 100%
rename from projects/control/mobile_manipulation/rviz_config.rviz
rename to projects/python/control/mobile_manipulation/rviz_config.rviz
diff --git a/projects/control/single_demo_grasp/README.md b/projects/python/control/single_demo_grasp/README.md
similarity index 78%
rename from projects/control/single_demo_grasp/README.md
rename to projects/python/control/single_demo_grasp/README.md
index d28ef3d661..0486c939e0 100755
--- a/projects/control/single_demo_grasp/README.md
+++ b/projects/python/control/single_demo_grasp/README.md
@@ -26,7 +26,7 @@ $ make install_runtime_dependencies
After installing dependencies, the user must source the workspace in the shell in order to detect the packages:
```
-$ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+$ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
```
## Demos
@@ -38,7 +38,7 @@ three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ export WEBOTS_HOME=/usr/local/webots
5. $ roslaunch single_demo_grasping_demo panda_sim.launch
```
@@ -47,7 +47,7 @@ three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ roslaunch single_demo_grasping_demo camera_stream_inference.launch
```
@@ -55,20 +55,20 @@ three different nodes must be launched consecutively in order to properly run th
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
4. $ roslaunch single_demo_grasping_demo panda_sim_control.launch
```
## Examples
You can find an example on how to use the learner class to run inference and see the result in the following directory:
```
-$ cd projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
+$ cd projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
```
simply run:
```
1. $ cd path/to/opendr/home # change accordingly
2. $ source bin/setup.bash
-3. $ source projects/control/single_demo_grasp/simulation_ws/devel/setup.bash
-4. $ cd projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
+3. $ source projects/python/control/single_demo_grasp/simulation_ws/devel/setup.bash
+4. $ cd projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/
5. $ ./single_demo_inference.py
```
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/CMakeLists.txt b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/CMakeLists.txt
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/CMakeLists.txt
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/CMakeLists.txt
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/mainpage.dox b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/mainpage.dox
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/mainpage.dox
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/mainpage.dox
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/finger.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/finger.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/finger.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/finger.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/hand.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/hand.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/hand.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/hand.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link0.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link0.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link0.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link0.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link1.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link1.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link1.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link1.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link2.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link2.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link2.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link2.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link3.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link3.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link3.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link3.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link4.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link4.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link4.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link4.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link5.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link5.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link5.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link5.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link6.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link6.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link6.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link6.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link7.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link7.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link7.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/meshes/visual/link7.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/package.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/package.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/package.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/package.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/dual_panda_example.urdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/dual_panda_example.urdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/dual_panda_example.urdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/dual_panda_example.urdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.urdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.urdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.urdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.urdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/hand.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.urdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.urdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.urdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.urdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm_hand.urdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm_hand.urdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm_hand.urdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/robots/panda_arm_hand.urdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/franka_description/rosdoc.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/rosdoc.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/franka_description/rosdoc.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/franka_description/rosdoc.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/.setup_assistant b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/.setup_assistant
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/.setup_assistant
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/.setup_assistant
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CHANGELOG.rst b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CHANGELOG.rst
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CHANGELOG.rst
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CHANGELOG.rst
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CMakeLists.txt b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CMakeLists.txt
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CMakeLists.txt
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/CMakeLists.txt
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/README.md b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/README.md
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/README.md
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/README.md
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/chomp_planning.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/chomp_planning.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/chomp_planning.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/chomp_planning.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/fake_controllers.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/fake_controllers.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/fake_controllers.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/fake_controllers.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/hand.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/hand.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/hand.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/hand.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/joint_limits.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/joint_limits.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/joint_limits.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/joint_limits.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/kinematics.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/kinematics.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/kinematics.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/kinematics.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/lerp_planning.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/lerp_planning.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/lerp_planning.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/lerp_planning.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/ompl_planning.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/ompl_planning.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/ompl_planning.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/ompl_planning.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.srdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.srdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.srdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.srdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm_hand.srdf.xacro b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm_hand.srdf.xacro
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm_hand.srdf.xacro
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_arm_hand.srdf.xacro
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_controllers.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_controllers.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_controllers.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_controllers.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_gripper_controllers.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_gripper_controllers.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_gripper_controllers.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/panda_gripper_controllers.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_depthmap.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_depthmap.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_depthmap.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_depthmap.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_pointcloud.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_pointcloud.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_pointcloud.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/sensors_kinect_pointcloud.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/stomp_planning.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/stomp_planning.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/stomp_planning.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/stomp_planning.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/trajopt_planning.yaml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/trajopt_planning.yaml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/trajopt_planning.yaml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/config/trajopt_planning.yaml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/chomp_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/chomp_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/chomp_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/chomp_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/default_warehouse_db.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/default_warehouse_db.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/default_warehouse_db.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/default_warehouse_db.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo_chomp.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo_chomp.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo_chomp.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/demo_chomp.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/fake_moveit_controller_manager.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/fake_moveit_controller_manager.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/fake_moveit_controller_manager.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/fake_moveit_controller_manager.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/joystick_control.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/joystick_control.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/joystick_control.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/joystick_control.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/lerp_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/lerp_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/lerp_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/lerp_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/move_group.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/move_group.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/move_group.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/move_group.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit.rviz b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit.rviz
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit.rviz
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit.rviz
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_empty.rviz b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_empty.rviz
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_empty.rviz
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_empty.rviz
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_rviz.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_rviz.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_rviz.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/moveit_rviz.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl-chomp_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl-chomp_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl-chomp_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl-chomp_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/ompl_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_control_moveit_rviz.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_control_moveit_rviz.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_control_moveit_rviz.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_control_moveit_rviz.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_gripper_moveit_controller_manager.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_gripper_moveit_controller_manager.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_gripper_moveit_controller_manager.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_gripper_moveit_controller_manager.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_controller_manager.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_controller_manager.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_controller_manager.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_controller_manager.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_sensor_manager.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_sensor_manager.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_sensor_manager.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/panda_moveit_sensor_manager.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_context.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_context.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_context.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_context.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_ompl.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_ompl.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_ompl.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_ompl.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_trajopt.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_trajopt.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_trajopt.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/run_benchmark_trajopt.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/sensor_manager.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/sensor_manager.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/sensor_manager.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/sensor_manager.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/setup_assistant.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/setup_assistant.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/setup_assistant.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/setup_assistant.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/stomp_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/stomp_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/stomp_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/stomp_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajectory_execution.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajectory_execution.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajectory_execution.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajectory_execution.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajopt_planning_pipeline.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajopt_planning_pipeline.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajopt_planning_pipeline.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/trajopt_planning_pipeline.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse_settings.launch.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse_settings.launch.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse_settings.launch.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/launch/warehouse_settings.launch.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/package.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/package.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/package.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/panda_moveit_config/package.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/CMakeLists.txt b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/CMakeLists.txt
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/CMakeLists.txt
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/CMakeLists.txt
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/README.md b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/README.md
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/README.md
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/README.md
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/inference_utils.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/inference_utils.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/inference_utils.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/inference_utils.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/samples/0.jpg b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/samples/0.jpg
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/samples/0.jpg
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/samples/0.jpg
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_grasp_camera_stream.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_grasp_camera_stream.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_grasp_camera_stream.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_grasp_camera_stream.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_inference.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_inference.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_inference.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/inference/single_demo_inference.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/camera_stream_inference.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/camera_stream_inference.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/camera_stream_inference.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/camera_stream_inference.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_controller.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_controller.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_controller.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_controller.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim_control.launch b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim_control.launch
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim_control.launch
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/launch/panda_sim_control.launch
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/cran_feld_pendulum.stl b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/cran_feld_pendulum.stl
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/cran_feld_pendulum.stl
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/cran_feld_pendulum.stl
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/d435.dae b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/d435.dae
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/d435.dae
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/objects/d435.dae
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/package.xml b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/package.xml
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/package.xml
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/package.xml
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/BallBearing.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/BallBearing.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/BallBearing.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/BallBearing.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CommonLine.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CommonLine.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CommonLine.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CommonLine.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CranfieldFace.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CranfieldFace.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CranfieldFace.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CranfieldFace.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CylinderPneumatic.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CylinderPneumatic.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CylinderPneumatic.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/CylinderPneumatic.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/FuelLine.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/FuelLine.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/FuelLine.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/FuelLine.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Housing.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Housing.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Housing.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Housing.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Pendulum.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Pendulum.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Pendulum.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/Pendulum.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/RodEnd.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/RodEnd.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/RodEnd.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/RodEnd.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/panda_arm_hand.proto b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/panda_arm_hand.proto
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/panda_arm_hand.proto
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/protos/panda_arm_hand.proto
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/camera_publisher.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/camera_publisher.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/camera_publisher.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/camera_publisher.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/constants.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/constants.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/constants.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/constants.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/gripper_command.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/gripper_command.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/gripper_command.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/gripper_command.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/joint_state_publisher.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/joint_state_publisher.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/joint_state_publisher.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/joint_state_publisher.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/panda_ros.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/panda_ros.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/panda_ros.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/panda_ros.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/single_demo_grasp_action.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/single_demo_grasp_action.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/single_demo_grasp_action.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/single_demo_grasp_action.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/trajectory_follower.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/trajectory_follower.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/trajectory_follower.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/trajectory_follower.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/utilities.py b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/utilities.py
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/utilities.py
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/scripts/utilities.py
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/.franka_simulation.wbproj b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/.franka_simulation.wbproj
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/.franka_simulation.wbproj
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/.franka_simulation.wbproj
diff --git a/projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/franka_simulation.wbt b/projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/franka_simulation.wbt
similarity index 100%
rename from projects/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/franka_simulation.wbt
rename to projects/python/control/single_demo_grasp/simulation_ws/src/single_demo_grasping_demo/worlds/franka_simulation.wbt
diff --git a/projects/perception/.gitignore b/projects/python/perception/.gitignore
similarity index 100%
rename from projects/perception/.gitignore
rename to projects/python/perception/.gitignore
diff --git a/projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/src/fmp_slam_eval/__init__.py b/projects/python/perception/__init__.py
similarity index 100%
rename from projects/perception/slam/full_map_posterior_gmapping/src/fmp_slam_eval/src/fmp_slam_eval/__init__.py
rename to projects/python/perception/__init__.py
diff --git a/projects/perception/activity_recognition/benchmark/README.md b/projects/python/perception/activity_recognition/benchmark/README.md
similarity index 76%
rename from projects/perception/activity_recognition/benchmark/README.md
rename to projects/python/perception/activity_recognition/benchmark/README.md
index 8e8dcef68e..29e38ecf76 100644
--- a/projects/perception/activity_recognition/benchmark/README.md
+++ b/projects/python/perception/activity_recognition/benchmark/README.md
@@ -25,4 +25,10 @@ X3D
CoX3D
```bash
./benchmark_cox3d.py
-```
\ No newline at end of file
+```
+
+CoTransEnc
+```bash
+./benchmark_cotransenc.py
+```
+Note: this script benchmarks various configurations of the Continual Transformer Encoder module only; it does not include any feature extraction that you might want to apply beforehand.
\ No newline at end of file
diff --git a/projects/python/perception/activity_recognition/benchmark/benchmark_cotransenc.py b/projects/python/perception/activity_recognition/benchmark/benchmark_cotransenc.py
new file mode 100644
index 0000000000..f5957fd021
--- /dev/null
+++ b/projects/python/perception/activity_recognition/benchmark/benchmark_cotransenc.py
@@ -0,0 +1,89 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import torch
+import yaml
+from opendr.perception.activity_recognition import CoTransEncLearner
+
+from pytorch_benchmark import benchmark
+import logging
+from typing import List, Union
+from opendr.engine.target import Category
+from opendr.engine.data import Image
+
+logger = logging.getLogger("benchmark")
+logging.basicConfig()
+logger.setLevel("DEBUG")
+
+
+def benchmark_cotransenc():
+ temp_dir = "./projects/python/perception/activity_recognition/benchmark/tmp"
+ num_runs = 100
+ batch_size = 1
+
+ for num_layers in [1, 2]: # --------- A few plausible hparams ----------
+ for (input_dims, sequence_len) in [(1024, 32), (2048, 64), (4096, 64)]:
+ print(
+ f"==== Benchmarking CoTransEncLearner (l{num_layers}-d{input_dims}-t{sequence_len}) ===="
+ )
+ learner = CoTransEncLearner(
+ device="cuda" if torch.cuda.is_available() else "cpu",
+ temp_path=temp_dir + f"/{num_layers}_{input_dims}_{sequence_len}",
+ num_layers=num_layers,
+ input_dims=input_dims,
+ hidden_dims=input_dims // 2,
+ sequence_len=sequence_len,
+ num_heads=input_dims // 128,
+ batch_size=batch_size,
+ )
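+            # Apply the learner's runtime optimization before timing inference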
+ learner.optimize()
+
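+            # A single time-step feature vector; the continual encoder processes one step per infer() call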
+ sample = torch.randn(1, input_dims)
+
+ # Warm-up continual inference not needed for optimized version:
+ # for _ in range(sequence_len - 1):
+ # learner.infer(sample)
+
+ def get_device_fn(*args):
+ nonlocal learner
+ return next(learner.model.parameters()).device
+
+ def transfer_to_device_fn(
+ sample: Union[torch.Tensor, List[Category], List[Image]],
+ device: torch.device,
+ ):
+ if isinstance(sample, torch.Tensor):
+ return sample.to(device=device)
+
+ assert isinstance(sample, Category)
+ return Category(
+ prediction=sample.data,
+ confidence=sample.confidence.to(device=device),
+ )
+
+ results1 = benchmark(
+ model=learner.infer,
+ sample=sample,
+ num_runs=num_runs,
+ get_device_fn=get_device_fn,
+ transfer_to_device_fn=transfer_to_device_fn,
+ batch_size=batch_size,
+ print_fn=print,
+ )
+ print(yaml.dump({"learner.infer": results1}))
+
+
+if __name__ == "__main__":
+ benchmark_cotransenc()
diff --git a/projects/perception/activity_recognition/benchmark/benchmark_cox3d.py b/projects/python/perception/activity_recognition/benchmark/benchmark_cox3d.py
similarity index 84%
rename from projects/perception/activity_recognition/benchmark/benchmark_cox3d.py
rename to projects/python/perception/activity_recognition/benchmark/benchmark_cox3d.py
index fb63294bac..a9ffa468a4 100644
--- a/projects/perception/activity_recognition/benchmark/benchmark_cox3d.py
+++ b/projects/python/perception/activity_recognition/benchmark/benchmark_cox3d.py
@@ -29,7 +29,7 @@
def benchmark_cox3d():
- temp_dir = "./projects/perception/activity_recognition/benchmark/tmp"
+ temp_dir = "./projects/python/perception/activity_recognition/benchmark/tmp"
num_runs = 100
@@ -75,12 +75,13 @@ def benchmark_cox3d():
temp_path=temp_dir,
backbone=backbone,
)
+ learner.optimize()
sample = torch.randn(
batch_size[backbone], *input_shape[backbone]
- ) # (B, C, T, H, W)
- image_samples = [Image(v) for v in sample]
- image_sample = [Image(sample[0])]
+ ) # (B, C, H, W)
+ # image_samples = [Image(v) for v in sample]
+ # image_sample = [Image(sample[0])]
def get_device_fn(*args):
nonlocal learner
@@ -101,15 +102,18 @@ def transfer_to_device_fn(
assert isinstance(sample[0], Category)
return [
- Category(prediction=s.data, confidence=s.confidence.to(device=device),)
+ Category(
+ prediction=s.data,
+ confidence=s.confidence.to(device=device),
+ )
for s in sample
]
print("== Benchmarking learner.infer ==")
results1 = benchmark(
model=learner.infer,
- sample=image_samples,
- sample_with_batch_size1=image_sample,
+ sample=sample,
+ # sample_with_batch_size1=image_sample,
num_runs=num_runs,
get_device_fn=get_device_fn,
transfer_to_device_fn=transfer_to_device_fn,
@@ -118,10 +122,6 @@ def transfer_to_device_fn(
)
print(yaml.dump({"learner.infer": results1}))
- print("== Benchmarking model directly ==")
- results2 = benchmark(learner.model, sample, num_runs=num_runs, print_fn=print)
- print(yaml.dump({"learner.model.forward": results2}))
-
if __name__ == "__main__":
benchmark_cox3d()
diff --git a/projects/perception/activity_recognition/benchmark/benchmark_x3d.py b/projects/python/perception/activity_recognition/benchmark/benchmark_x3d.py
similarity index 85%
rename from projects/perception/activity_recognition/benchmark/benchmark_x3d.py
rename to projects/python/perception/activity_recognition/benchmark/benchmark_x3d.py
index 5256cf308d..d60b5cc8f6 100644
--- a/projects/perception/activity_recognition/benchmark/benchmark_x3d.py
+++ b/projects/python/perception/activity_recognition/benchmark/benchmark_x3d.py
@@ -29,7 +29,7 @@
def benchmark_x3d():
- temp_dir = "./projects/perception/activity_recognition/benchmark/tmp"
+ temp_dir = "./projects/python/perception/activity_recognition/benchmark/tmp"
num_runs = 100
@@ -74,14 +74,16 @@ def benchmark_x3d():
device="cuda" if torch.cuda.is_available() else "cpu",
temp_path=temp_dir,
backbone=backbone,
+ batch_size=batch_size[backbone],
)
+ learner.optimize()
learner.model.eval()
sample = torch.randn(
batch_size[backbone], *input_shape[backbone]
) # (B, C, T, H, W)
- video_samples = [Video(v) for v in sample]
- video_sample = [Video(sample[0])]
+ # video_samples = [Video(v) for v in sample]
+ # video_sample = [Video(sample[0])]
def get_device_fn(*args):
nonlocal learner
@@ -102,15 +104,18 @@ def transfer_to_device_fn(
assert isinstance(sample[0], Category)
return [
- Category(prediction=s.data, confidence=s.confidence.to(device=device),)
+ Category(
+ prediction=s.data,
+ confidence=s.confidence.to(device=device),
+ )
for s in sample
]
print("== Benchmarking learner.infer ==")
results1 = benchmark(
model=learner.infer,
- sample=video_samples,
- sample_with_batch_size1=video_sample,
+ sample=sample,
+ # sample_with_batch_size1=sample[0].unsqueeze(0),
num_runs=num_runs,
get_device_fn=get_device_fn,
transfer_to_device_fn=transfer_to_device_fn,
@@ -119,10 +124,6 @@ def transfer_to_device_fn(
)
print(yaml.dump({"learner.infer": results1}))
- print("== Benchmarking model directly ==")
- results2 = benchmark(learner.model, sample, num_runs=num_runs, print_fn=print)
- print(yaml.dump({"learner.model.forward": results2}))
-
if __name__ == "__main__":
benchmark_x3d()
diff --git a/projects/perception/activity_recognition/benchmark/install_on_server.sh b/projects/python/perception/activity_recognition/benchmark/install_on_server.sh
similarity index 100%
rename from projects/perception/activity_recognition/benchmark/install_on_server.sh
rename to projects/python/perception/activity_recognition/benchmark/install_on_server.sh
diff --git a/projects/perception/activity_recognition/benchmark/requirements.txt b/projects/python/perception/activity_recognition/benchmark/requirements.txt
similarity index 100%
rename from projects/perception/activity_recognition/benchmark/requirements.txt
rename to projects/python/perception/activity_recognition/benchmark/requirements.txt
diff --git a/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/README.md b/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/README.md
new file mode 100644
index 0000000000..e804ca345b
--- /dev/null
+++ b/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/README.md
@@ -0,0 +1,13 @@
+# Continual Transformer Encoder demo
+
+The file [demo.py](demo.py) is a demo of how to use the `CoTransEncLearner`, including fitting, evaluation, runtime optimization and inference.
+
+To fit, evaluate and perform inference, use the following command:
+```bash
+python demo.py --fit --eval --infer
+```
+
+Please use the `--help` flag to see further script options:
+```bash
+python demo.py --help
+```
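+
+The demo also exposes an `--optimize` flag for runtime optimization (see [demo.py](demo.py)). As a sketch, one possible combination is to fit, optimize and then run inference:
+```bash
+python demo.py --fit --optimize --infer
+```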
diff --git a/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/demo.py b/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/demo.py
new file mode 100644
index 0000000000..b575adc2bf
--- /dev/null
+++ b/projects/python/perception/activity_recognition/demos/continual_transformer_encoder/demo.py
@@ -0,0 +1,88 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import torch
+
+from opendr.perception.activity_recognition import CoTransEncLearner
+from opendr.perception.activity_recognition.datasets import DummyTimeseriesDataset
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--fit", help="Fit the model", default=False, action="store_true")
+    parser.add_argument("--num_fit_steps", help="Number of steps to fit the model", type=int, default=10)
+ parser.add_argument("--eval", help="Evaluate the model", default=False, action="store_true")
+    parser.add_argument("--optimize", help="Optimize the model before inference", default=False, action="store_true")
+ parser.add_argument("--infer", help="Perform inference using the model", default=False, action="store_true")
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cpu")
+    parser.add_argument("--input_dims", help="Input dimensionality of the model and dataset", type=int, default=8)
+    parser.add_argument("--hidden_dims", help="The number of hidden dimensions of the model", type=int, default=32)
+ parser.add_argument("--sequence_len", help="The length of the time-series to consider", type=int, default=64)
+ parser.add_argument("--num_heads", help="Number of attention heads to employ", type=int, default=8)
+ parser.add_argument("--batch_size", help="The batch size of the model", type=int, default=2)
+ return parser.parse_args()
+
+
+def main(args):
+ # Define learner
+ learner = CoTransEncLearner(
+ batch_size=args.batch_size,
+        device=args.device,
+ input_dims=args.input_dims,
+ hidden_dims=args.hidden_dims,
+ sequence_len=args.sequence_len,
+ num_heads=args.num_heads,
+ num_classes=4,
+ )
+
+ # Define datasets
+ train_ds = DummyTimeseriesDataset(
+ sequence_len=args.sequence_len,
+ num_sines=args.input_dims,
+ num_datapoints=args.sequence_len * 2,
+ )
+ val_ds = DummyTimeseriesDataset(
+ sequence_len=args.sequence_len,
+ num_sines=args.input_dims,
+ num_datapoints=args.sequence_len * 2,
+ base_offset=args.sequence_len * 2,
+ )
+ test_ds = DummyTimeseriesDataset(
+ sequence_len=args.sequence_len,
+ num_sines=args.input_dims,
+ num_datapoints=args.sequence_len * 2,
+ base_offset=args.sequence_len * 4,
+ )
+
+ # Invoke operations
+ if args.fit:
+ learner.fit(dataset=train_ds, val_dataset=val_ds, steps=args.num_fit_steps)
+
+ if args.eval:
+ results = learner.eval(test_ds)
+ print("Evaluation results: ", results)
+
+ if args.optimize:
+ learner.optimize()
+
+ if args.infer:
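+        # Grab a single input sample from the first validation batch and run inference on it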
+ dl = torch.utils.data.DataLoader(val_ds, batch_size=args.batch_size, num_workers=0)
+ tensor = next(iter(dl))[0][0]
+ category = learner.infer(tensor)
+ print(f"Inferred category.data = {category.data}, category.confidence = {category.confidence.detach().numpy()}")
+
+
+if __name__ == "__main__":
+ main(parse_args())
diff --git a/projects/perception/activity_recognition/demos/online_recognition/README.md b/projects/python/perception/activity_recognition/demos/online_recognition/README.md
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/README.md
rename to projects/python/perception/activity_recognition/demos/online_recognition/README.md
diff --git a/projects/perception/slam/full_map_posterior_gmapping/src/map_simulator/src/__init__.py b/projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/__init__.py
similarity index 100%
rename from projects/perception/slam/full_map_posterior_gmapping/src/map_simulator/src/__init__.py
rename to projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/__init__.py
diff --git a/projects/perception/activity_recognition/demos/online_recognition/activity_recognition/screenshot.png b/projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/screenshot.png
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/activity_recognition/screenshot.png
rename to projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/screenshot.png
diff --git a/projects/perception/activity_recognition/demos/online_recognition/activity_recognition/video.gif b/projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/video.gif
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/activity_recognition/video.gif
rename to projects/python/perception/activity_recognition/demos/online_recognition/activity_recognition/video.gif
diff --git a/projects/perception/activity_recognition/demos/online_recognition/demo.py b/projects/python/perception/activity_recognition/demos/online_recognition/demo.py
similarity index 99%
rename from projects/perception/activity_recognition/demos/online_recognition/demo.py
rename to projects/python/perception/activity_recognition/demos/online_recognition/demo.py
index 5bfd19d9ed..62cbbe364f 100644
--- a/projects/perception/activity_recognition/demos/online_recognition/demo.py
+++ b/projects/python/perception/activity_recognition/demos/online_recognition/demo.py
@@ -52,12 +52,12 @@ def index():
def runnig_fps(alpha=0.1):
- t0 = time.time_ns()
+ t0 = time.perf_counter()
fps_avg = 10
def wrapped():
nonlocal t0, alpha, fps_avg
- t1 = time.time_ns()
+ t1 = time.perf_counter()
delta = (t1 - t0) * 1e-9
t0 = t1
fps_avg = alpha * (1 / delta) + (1 - alpha) * fps_avg
diff --git a/projects/perception/activity_recognition/demos/online_recognition/requirements.txt b/projects/python/perception/activity_recognition/demos/online_recognition/requirements.txt
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/requirements.txt
rename to projects/python/perception/activity_recognition/demos/online_recognition/requirements.txt
diff --git a/projects/perception/activity_recognition/demos/online_recognition/setup.py b/projects/python/perception/activity_recognition/demos/online_recognition/setup.py
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/setup.py
rename to projects/python/perception/activity_recognition/demos/online_recognition/setup.py
diff --git a/projects/perception/activity_recognition/demos/online_recognition/templates/index.html b/projects/python/perception/activity_recognition/demos/online_recognition/templates/index.html
similarity index 100%
rename from projects/perception/activity_recognition/demos/online_recognition/templates/index.html
rename to projects/python/perception/activity_recognition/demos/online_recognition/templates/index.html
diff --git a/projects/perception/face_recognition/README.md b/projects/python/perception/face_recognition/README.md
similarity index 100%
rename from projects/perception/face_recognition/README.md
rename to projects/python/perception/face_recognition/README.md
diff --git a/projects/perception/face_recognition/demos/benchmarking_demo.py b/projects/python/perception/face_recognition/demos/benchmarking_demo.py
similarity index 100%
rename from projects/perception/face_recognition/demos/benchmarking_demo.py
rename to projects/python/perception/face_recognition/demos/benchmarking_demo.py
diff --git a/projects/perception/face_recognition/demos/eval_demo.py b/projects/python/perception/face_recognition/demos/eval_demo.py
similarity index 100%
rename from projects/perception/face_recognition/demos/eval_demo.py
rename to projects/python/perception/face_recognition/demos/eval_demo.py
diff --git a/projects/perception/face_recognition/demos/inference_demo.py b/projects/python/perception/face_recognition/demos/inference_demo.py
similarity index 100%
rename from projects/perception/face_recognition/demos/inference_demo.py
rename to projects/python/perception/face_recognition/demos/inference_demo.py
diff --git a/projects/perception/face_recognition/demos/inference_tutorial.ipynb b/projects/python/perception/face_recognition/demos/inference_tutorial.ipynb
similarity index 100%
rename from projects/perception/face_recognition/demos/inference_tutorial.ipynb
rename to projects/python/perception/face_recognition/demos/inference_tutorial.ipynb
diff --git a/projects/perception/face_recognition/demos/webcam_demo.py b/projects/python/perception/face_recognition/demos/webcam_demo.py
similarity index 100%
rename from projects/perception/face_recognition/demos/webcam_demo.py
rename to projects/python/perception/face_recognition/demos/webcam_demo.py
diff --git a/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/README.md b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/README.md
new file mode 100644
index 0000000000..8bd720511e
--- /dev/null
+++ b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/README.md
@@ -0,0 +1,77 @@
+# Image-based Facial Expression Recognition Demo
+
+This folder contains a demo of the image-based facial expression recognition method introduced in [[1]](#1).
+The demo framework has three main features:
+- Image: recognizes facial expressions in images.
+- Video: recognizes facial expressions in videos using a frame-based approach.
+- Webcam: connects to a webcam and recognizes facial expressions of the closest face detected by a face detection algorithm.
+
+The demo utilizes the OpenCV Haar Cascade face detector [[2]](https://ieeexplore.ieee.org/abstract/document/990517) for real-time face detection.
+
+#### Running the demo
+The models pretrained on the AffectNet Categorical dataset are provided by [[1]](#1) and can be found [here](https://github.com/siqueira-hc/Efficient-Facial-Feature-Learning-with-Wide-Ensemble-based-Convolutional-Neural-Networks/tree/master/model/ml/trained_models/esr_9).
+**Please note that the pretrained weights cannot be used for commercial purposes.**
+
+To recognize a facial expression in images, run the following command:
+```bash
+python inference_demo.py image -i ./media/jackie.jpg -d
+```
+
+The argument `image` indicates that the input is an image. The location of the image is specified after `-i`, and `-d` sets the display mode to true.
+If the location of the image file is not specified, the demo automatically downloads a sample image file from the FTP server.
+
+```bash
+python inference_demo.py image -i 'image_path' -d
+```
+
+To recognize a facial expression in videos, run the following command:
+```bash
+python inference_demo.py video -i 'video_path' -d -f 5
+```
+The argument `video` indicates that the input is a video. The location of the video is specified after `-i`, `-d` sets the display mode to true, and `-f` defines the number of frames to be processed.
+If the location of the video file is not specified, the demo automatically downloads a sample video file from the FTP server.
+
+To recognize a facial expression in images captured from a webcam, run the following command:
+```bash
+python inference_demo.py webcam -d
+```
+The argument `webcam` instructs the framework to capture images from a webcam. `-d` sets the display mode to true.
+
+#### List of Arguments
+Positional arguments:
+
+- **mode**:\
+Select the running mode of the demo, which is one of 'image', 'video' or 'webcam'.
+Input values: {image, video, webcam}.
+
+Optional arguments:
+
+- **-h (--help)**:\
+Display the help message.
+
+- **-d (--display)**:\
+Display a window with the input data on the left and the output data on the right (i.e., detected face, emotions, and affect values).
+
+- **-i (--input)**:\
+Define the full path to an image or video.
+
+- **-c (--device)**:\
+Specify the device, which can be 'cuda' or 'cpu'.
+
+- **-w (--webcam)**:\
+Define the webcam to be used, by its 'id', when the webcam mode is selected. The default camera is used if 'id' is not specified.
+
+- **-f (--frames)**:\
+Set the number of frames to be processed out of every 30 frames. The lower the number, the faster the processing. A combined example of these options is shown below.
+
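+For example, to process a local video (the path here is only illustrative) on the GPU, display the results, and handle 5 frames out of every 30, the options can be combined as follows:
+```bash
+python inference_demo.py video -i ./media/my_video.mp4 -d -f 5 -c cuda
+```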
+
+## Acknowledgement
+This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR). This publication reflects the authors’ views only. The European Commission is not responsible for any use that may be made of the information it contains.
+
+
+## References
+[1]
+[Siqueira, Henrique, Sven Magg, and Stefan Wermter. "Efficient facial feature learning with wide ensemble-based convolutional neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.](
+https://ojs.aaai.org/index.php/AAAI/article/view/6037)
+
+[2]
+[Viola, Paul, and Michael Jones. "Rapid object detection using a boosted cascade of simple features." Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001. Vol. 1. Ieee, 2001](
+https://ieeexplore.ieee.org/abstract/document/990517)
diff --git a/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/benchmark_esr.py b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/benchmark_esr.py
new file mode 100644
index 0000000000..2e5910a794
--- /dev/null
+++ b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/benchmark_esr.py
@@ -0,0 +1,91 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import torch
+import yaml
+from pytorch_benchmark import benchmark
+import logging
+import argparse
+
+# opendr imports
+from opendr.perception.facial_expression_recognition import FacialEmotionLearner
+from opendr.engine.data import Image
+
+
+logger = logging.getLogger("benchmark")
+logging.basicConfig()
+logger.setLevel("DEBUG")
+
+
+def benchmark_esr(args):
+ results_dir = "./results"
+ if not os.path.exists(results_dir):
+ os.makedirs(results_dir)
+ device = args.device
+
+ print(f"==== Benchmarking {args.method} ====")
+
+ learner = FacialEmotionLearner(device=device, ensemble_size=args.ensemble_size, diversify=True)
+ learner.init_model(num_branches=args.ensemble_size)
+
+ if device == 'cuda':
+ learner.model.cuda()
+
+ num_runs = 100
+ batch_size = 32
+ C = 3
+ H = 96
+ W = 96
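+    # Random tensor standing in for a single 96x96 RGB face crop (C, H, W)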
+ input_face = torch.randn(C, H, W)
+ input_img = Image(input_face)
+ input_batch = []
+ for i in range(batch_size):
+ input_batch.append(input_img)
+ if type(input_batch) is list:
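+        # Stack the OpenDR Image data into a single (batch_size, C, H, W) tensor for batched inference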
+ input_batch = torch.stack([torch.tensor(v.data) for v in input_batch])
+
+ def get_device_fn(*args):
+ # nonlocal learner
+ return next(learner.model.parameters()).device
+
+ def transfer_to_device_fn(sample, device,):
+ return sample
+
+ print("== Benchmarking learner.infer ==")
+ results1 = benchmark(model=learner.infer,
+ sample=input_batch,
+ sample_with_batch_size1=None,
+ num_runs=num_runs,
+ get_device_fn=get_device_fn,
+ transfer_to_device_fn=transfer_to_device_fn,
+ batch_size=batch_size,
+ print_fn=print,
+ )
+ with open(results_dir + f"/benchmark_{args.method}_{device}.txt", "a") as f:
+ print("== Benchmarking learner.infer ==", file=f)
+ print(yaml.dump({"learner.infer": results1}), file=f)
+ print("\n\n", file=f)
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda")
+ parser.add_argument('--method', type=str, default='div_esr_9',
+ help='action detection method')
+ parser.add_argument('--ensemble_size', type=int, default=9,
+ help='number of ensemble branches')
+
+ args = parser.parse_args()
+ benchmark_esr(args)
diff --git a/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/face_detector/frontal_face.xml b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/face_detector/frontal_face.xml
new file mode 100644
index 0000000000..cbd1aa89e9
--- /dev/null
+++ b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/face_detector/frontal_face.xml
@@ -0,0 +1,33314 @@
+<!-- OpenCV Haar cascade frontal-face classifier data (stageType BOOST, featureType HAAR,
+     24x24 detection window, 25 stages); the 33,314 lines of stage thresholds and feature
+     coefficients that make up the cascade are not reproduced here. -->
+ <_>
+
+ 0 -1 656 4.4755998998880386e-02
+
+ -2.4140599370002747e-01 5.4019999504089355e-01
+ <_>
+
+ 0 -1 657 4.0369000285863876e-02
+
+ 5.7680001482367516e-03 5.6578099727630615e-01
+ <_>
+
+ 0 -1 658 3.7735998630523682e-02
+
+ 3.8180999457836151e-02 -7.9370397329330444e-01
+ <_>
+
+ 0 -1 659 6.0752999037504196e-02
+
+ 7.6453000307083130e-02 1.4813209772109985e+00
+ <_>
+
+ 0 -1 660 -1.9832000136375427e-02
+
+ -1.6971720457077026e+00 -2.7370000258088112e-02
+ <_>
+
+ 0 -1 661 -1.6592699289321899e-01
+
+ 6.2976002693176270e-01 3.1762998551130295e-02
+ <_>
+
+ 0 -1 662 6.9014996290206909e-02
+
+ -3.3463200926780701e-01 3.0076700448989868e-01
+ <_>
+
+ 0 -1 663 1.1358000338077545e-02
+
+ 2.2741499543190002e-01 -3.8224700093269348e-01
+ <_>
+
+ 0 -1 664 1.7000000225380063e-03
+
+ 1.9223800301551819e-01 -5.2735102176666260e-01
+ <_>
+
+ 0 -1 665 7.9769000411033630e-02
+
+ 9.1491997241973877e-02 2.1049048900604248e+00
+ <_>
+
+ 0 -1 666 -5.7144001126289368e-02
+
+ -1.7452130317687988e+00 -4.0910001844167709e-02
+ <_>
+
+ 0 -1 667 7.3830001056194305e-03
+
+ -2.4214799702167511e-01 3.5577800869941711e-01
+ <_>
+
+ 0 -1 668 -1.8040999770164490e-02
+
+ 1.1779999732971191e+00 -1.7676700651645660e-01
+ <_>
+
+ 0 -1 669 9.4503000378608704e-02
+
+ 1.3936099410057068e-01 -1.2993700504302979e+00
+ <_>
+
+ 0 -1 670 5.4210000671446323e-03
+
+ -5.4608601331710815e-01 1.3916400074958801e-01
+ <_>
+
+ 0 -1 671 7.0290002040565014e-03
+
+ -2.1597200632095337e-01 3.9258098602294922e-01
+ <_>
+
+ 0 -1 672 3.4515999257564545e-02
+
+ 6.3188999891281128e-02 -7.2108101844787598e-01
+ <_>
+
+ 0 -1 673 -5.1924999803304672e-02
+
+ 6.8667602539062500e-01 6.3272997736930847e-02
+ <_>
+
+ 0 -1 674 -6.9162003695964813e-02
+
+ 1.7411810159683228e+00 -1.6619299352169037e-01
+ <_>
+
+ 0 -1 675 -5.5229999125003815e-03
+
+ 3.0694699287414551e-01 -1.6662900149822235e-01
+ <_>
+
+ 0 -1 676 6.8599998950958252e-02
+
+ -2.1405400335788727e-01 7.3185002803802490e-01
+ <_>
+
+ 0 -1 677 -6.7038998007774353e-02
+
+ -7.9360598325729370e-01 2.0525799691677094e-01
+ <_>
+
+ 0 -1 678 -2.1005000919103622e-02
+
+ 3.7344399094581604e-01 -2.9618600010871887e-01
+ <_>
+
+ 0 -1 679 2.0278999581933022e-02
+
+ -1.5200000256299973e-02 4.0555301308631897e-01
+ <_>
+
+ 0 -1 680 -4.7107998281717300e-02
+
+ 1.2116849422454834e+00 -1.7464299499988556e-01
+ <_>
+
+ 0 -1 681 1.8768499791622162e-01
+
+ -2.2909000515937805e-02 6.9645798206329346e-01
+ <_>
+
+ 0 -1 682 -4.3228998780250549e-02
+
+ -1.0602480173110962e+00 -5.5599998449906707e-04
+ <_>
+
+ 0 -1 683 2.0004000514745712e-02
+
+ -3.2751001417636871e-02 5.3805100917816162e-01
+ <_>
+
+ 0 -1 684 8.0880001187324524e-03
+
+ 3.7548001855611801e-02 -7.4768900871276855e-01
+ <_>
+
+ 0 -1 685 2.7101000770926476e-02
+
+ -8.1790000200271606e-02 3.3387100696563721e-01
+ <_>
+
+ 0 -1 686 -9.1746002435684204e-02
+
+ -1.9213509559631348e+00 -3.8952998816967010e-02
+ <_>
+
+ 0 -1 687 -1.2454999610781670e-02
+
+ 4.8360601067543030e-01 1.8168000504374504e-02
+ <_>
+
+ 0 -1 688 1.4649000018835068e-02
+
+ -1.9906699657440186e-01 7.2815400362014771e-01
+ <_>
+
+ 0 -1 689 2.9101999476552010e-02
+
+ 1.9871099293231964e-01 -4.9216800928115845e-01
+ <_>
+
+ 0 -1 690 8.7799998000264168e-03
+
+ -1.9499599933624268e-01 7.7317398786544800e-01
+ <_>
+
+ 0 -1 691 -5.4740000516176224e-02
+
+ 1.8087190389633179e+00 6.8323001265525818e-02
+ <_>
+
+ 0 -1 692 -1.4798000454902649e-02
+
+ 7.8064900636672974e-01 -1.8709599971771240e-01
+ <_>
+
+ 0 -1 693 2.5012999773025513e-02
+
+ 1.5285299718379974e-01 -1.6021020412445068e+00
+ <_>
+
+ 0 -1 694 4.6548001468181610e-02
+
+ -1.6738200187683105e-01 1.1902060508728027e+00
+ <_>
+
+ 0 -1 695 1.7624000087380409e-02
+
+ -1.0285499691963196e-01 3.9175900816917419e-01
+ <_>
+
+ 0 -1 696 1.6319599747657776e-01
+
+ -3.5624001175165176e-02 -1.6098170280456543e+00
+ <_>
+
+ 0 -1 697 1.3137999922037125e-02
+
+ -5.6359000504016876e-02 5.4158902168273926e-01
+ <_>
+
+ 0 -1 698 -1.5665000304579735e-02
+
+ 2.8063100576400757e-01 -3.1708601117134094e-01
+ <_>
+
+ 0 -1 699 8.0554001033306122e-02
+
+ 1.2640400230884552e-01 -1.0297529697418213e+00
+ <_>
+
+ 0 -1 700 3.5363998264074326e-02
+
+ 2.0752999931573868e-02 -7.9105597734451294e-01
+ <_>
+
+ 0 -1 701 3.2986998558044434e-02
+
+ 1.9057099521160126e-01 -8.3839899301528931e-01
+ <_>
+
+ 0 -1 702 1.2195000424981117e-02
+
+ 7.3729000985622406e-02 -6.2780702114105225e-01
+ <_>
+
+ 0 -1 703 4.3065998703241348e-02
+
+ 4.7384999692440033e-02 1.5712939500808716e+00
+ <_>
+
+ 0 -1 704 3.0326999723911285e-02
+
+ -2.7314600348472595e-01 3.8572001457214355e-01
+ <_>
+
+ 0 -1 705 3.5493001341819763e-02
+
+ 5.4593998938798904e-02 5.2583402395248413e-01
+ <_>
+
+ 0 -1 706 -1.4596999622881413e-02
+
+ 3.8152599334716797e-01 -2.8332400321960449e-01
+ <_>
+
+ 0 -1 707 1.2606999836862087e-02
+
+ 1.5455099940299988e-01 -3.0501499772071838e-01
+ <_>
+
+ 0 -1 708 1.0172000154852867e-02
+
+ 2.3637000471353531e-02 -8.7217897176742554e-01
+ <_>
+
+ 0 -1 709 2.8843000531196594e-02
+
+ 1.6090999543666840e-01 -2.0277599990367889e-01
+ <_>
+
+ 0 -1 710 5.5100000463426113e-04
+
+ -6.1545401811599731e-01 8.0935999751091003e-02
+ <_>
+ 127
+ -3.5645289421081543e+00
+
+ <_>
+
+ 0 -1 711 4.8344001173973083e-02
+
+ -8.4904599189758301e-01 5.6974399089813232e-01
+ <_>
+
+ 0 -1 712 3.2460000365972519e-02
+
+ -8.1417298316955566e-01 4.4781699776649475e-01
+ <_>
+
+ 0 -1 713 3.3339999616146088e-02
+
+ -3.6423799395561218e-01 6.7937397956848145e-01
+ <_>
+
+ 0 -1 714 6.4019998535513878e-03
+
+ -1.1885459423065186e+00 1.9238699972629547e-01
+ <_>
+
+ 0 -1 715 -5.6889997795224190e-03
+
+ 3.3085298538208008e-01 -7.1334099769592285e-01
+ <_>
+
+ 0 -1 716 1.2698000296950340e-02
+
+ -5.0990802049636841e-01 1.1376299709081650e-01
+ <_>
+
+ 0 -1 717 6.0549997724592686e-03
+
+ -1.0470550060272217e+00 2.0222599804401398e-01
+ <_>
+
+ 0 -1 718 2.6420000940561295e-03
+
+ -5.0559401512145996e-01 3.6441200971603394e-01
+ <_>
+
+ 0 -1 719 -1.6925999894738197e-02
+
+ -9.9541902542114258e-01 1.2602199614048004e-01
+ <_>
+
+ 0 -1 720 2.8235999867320061e-02
+
+ -9.4137996435165405e-02 5.7780402898788452e-01
+ <_>
+
+ 0 -1 721 1.0428999550640583e-02
+
+ 2.3272900283336639e-01 -5.2569699287414551e-01
+ <_>
+
+ 0 -1 722 9.8860003054141998e-03
+
+ -1.0316299647092819e-01 4.7657600045204163e-01
+ <_>
+
+ 0 -1 723 2.6015000417828560e-02
+
+ -1.0920000495389104e-03 -1.5581729412078857e+00
+ <_>
+
+ 0 -1 724 -2.5537999346852303e-02
+
+ -6.5451401472091675e-01 1.8843199312686920e-01
+ <_>
+
+ 0 -1 725 -3.5310001112520695e-03
+
+ 2.8140598535537720e-01 -4.4575300812721252e-01
+ <_>
+
+ 0 -1 726 9.2449998483061790e-03
+
+ 1.5612000226974487e-01 -2.1370999515056610e-01
+ <_>
+
+ 0 -1 727 2.1030999720096588e-02
+
+ -2.9170298576354980e-01 5.2234101295471191e-01
+ <_>
+
+ 0 -1 728 -5.1063001155853271e-02
+
+ 1.3661290407180786e+00 3.0465999618172646e-02
+ <_>
+
+ 0 -1 729 -6.2330000102519989e-02
+
+ 1.2207020521163940e+00 -2.2434400022029877e-01
+ <_>
+
+ 0 -1 730 -3.2963000237941742e-02
+
+ -8.2016801834106445e-01 1.4531899988651276e-01
+ <_>
+
+ 0 -1 731 -3.7418000400066376e-02
+
+ -1.2218099832534790e+00 1.9448999315500259e-02
+ <_>
+
+ 0 -1 732 1.2402799725532532e-01
+
+ 1.2082300335168839e-01 -9.8729300498962402e-01
+ <_>
+
+ 0 -1 733 -8.9229997247457504e-03
+
+ -1.1688489913940430e+00 2.1105000749230385e-02
+ <_>
+
+ 0 -1 734 -5.9879999607801437e-02
+
+ -1.0689330101013184e+00 1.9860200583934784e-01
+ <_>
+
+ 0 -1 735 6.2620001845061779e-03
+
+ -3.6229598522186279e-01 3.8000801205635071e-01
+ <_>
+
+ 0 -1 736 -1.7673000693321228e-02
+
+ 4.9094098806381226e-01 -1.4606699347496033e-01
+ <_>
+
+ 0 -1 737 1.7579000443220139e-02
+
+ 5.8728098869323730e-01 -2.7774399518966675e-01
+ <_>
+
+ 0 -1 738 5.1560001447796822e-03
+
+ -7.5194999575614929e-02 6.0193097591400146e-01
+ <_>
+
+ 0 -1 739 -1.0599999688565731e-02
+
+ 2.7637401223182678e-01 -3.7794300913810730e-01
+ <_>
+
+ 0 -1 740 2.0884099602699280e-01
+
+ -5.3599998354911804e-03 1.0317809581756592e+00
+ <_>
+
+ 0 -1 741 -2.6412999257445335e-02
+
+ 8.2336401939392090e-01 -2.2480599582195282e-01
+ <_>
+
+ 0 -1 742 5.8892000466585159e-02
+
+ 1.3098299503326416e-01 -1.1853699684143066e+00
+ <_>
+
+ 0 -1 743 -1.1579000391066074e-02
+
+ -9.0667802095413208e-01 4.4126998633146286e-02
+ <_>
+
+ 0 -1 744 4.5988000929355621e-02
+
+ 1.0143999941647053e-02 1.0740900039672852e+00
+ <_>
+
+ 0 -1 745 -2.2838000208139420e-02
+
+ 1.7791990041732788e+00 -1.7315499484539032e-01
+ <_>
+
+ 0 -1 746 -8.1709995865821838e-03
+
+ 5.7386302947998047e-01 -7.4106000363826752e-02
+ <_>
+
+ 0 -1 747 3.5359999164938927e-03
+
+ -3.2072898745536804e-01 4.0182501077651978e-01
+ <_>
+
+ 0 -1 748 4.9444999545812607e-02
+
+ 1.9288000464439392e-01 -1.2166700363159180e+00
+ <_>
+
+ 0 -1 749 3.5139999818056822e-03
+
+ 6.9568000733852386e-02 -7.1323698759078979e-01
+ <_>
+
+ 0 -1 750 -3.0996000394225121e-02
+
+ -3.8862198591232300e-01 1.8098799884319305e-01
+ <_>
+
+ 0 -1 751 8.6452998220920563e-02
+
+ -2.5792999193072319e-02 -1.5453219413757324e+00
+ <_>
+
+ 0 -1 752 -1.3652600347995758e-01
+
+ -1.9199420213699341e+00 1.6613300144672394e-01
+ <_>
+
+ 0 -1 753 -5.7689999230206013e-03
+
+ -1.2822589874267578e+00 -1.5907999128103256e-02
+ <_>
+
+ 0 -1 754 -1.7899999395012856e-02
+
+ -4.0409898757934570e-01 2.3591600358486176e-01
+ <_>
+
+ 0 -1 755 -1.9969999790191650e-02
+
+ -7.2891902923583984e-01 5.6235000491142273e-02
+ <_>
+
+ 0 -1 756 -5.7493001222610474e-02
+
+ 5.7830798625946045e-01 -1.5796000137925148e-02
+ <_>
+
+ 0 -1 757 -8.3056002855300903e-02
+
+ 9.1511601209640503e-01 -2.1121400594711304e-01
+ <_>
+
+ 0 -1 758 -5.3771000355482101e-02
+
+ -5.1931297779083252e-01 1.8576000630855560e-01
+ <_>
+
+ 0 -1 759 -8.3670001477003098e-03
+
+ 2.4109700322151184e-01 -3.9648601412773132e-01
+ <_>
+
+ 0 -1 760 5.5406998842954636e-02
+
+ 1.6771200299263000e-01 -2.5664970874786377e+00
+ <_>
+
+ 0 -1 761 -6.7180998623371124e-02
+
+ -1.3658570051193237e+00 -1.4232000336050987e-02
+ <_>
+
+ 0 -1 762 -2.3900000378489494e-02
+
+ -1.7084569931030273e+00 1.6507799923419952e-01
+ <_>
+
+ 0 -1 763 5.5949999950826168e-03
+
+ -3.1373998522758484e-01 3.2837900519371033e-01
+ <_>
+
+ 0 -1 764 2.1294999867677689e-02
+
+ 1.4953400194644928e-01 -4.8579800128936768e-01
+ <_>
+
+ 0 -1 765 -2.4613000452518463e-02
+
+ 7.4346399307250977e-01 -2.2305199503898621e-01
+ <_>
+
+ 0 -1 766 -1.9626000896096230e-02
+
+ -4.0918299555778503e-01 1.8893200159072876e-01
+ <_>
+
+ 0 -1 767 -5.3266000002622604e-02
+
+ 8.1381601095199585e-01 -2.0853699743747711e-01
+ <_>
+
+ 0 -1 768 7.1290000341832638e-03
+
+ 3.2996100187301636e-01 -5.9937399625778198e-01
+ <_>
+
+ 0 -1 769 -2.2486999630928040e-02
+
+ -1.2551610469818115e+00 -2.0413000136613846e-02
+ <_>
+
+ 0 -1 770 -8.2310996949672699e-02
+
+ 1.3821430206298828e+00 5.9308998286724091e-02
+ <_>
+
+ 0 -1 771 1.3097000122070312e-01
+
+ -3.5843998193740845e-02 -1.5396369695663452e+00
+ <_>
+
+ 0 -1 772 1.4293000102043152e-02
+
+ -1.8475200235843658e-01 3.7455001473426819e-01
+ <_>
+
+ 0 -1 773 6.3479999080300331e-03
+
+ -4.4901099801063538e-01 1.3876999914646149e-01
+ <_>
+
+ 0 -1 774 -4.6055000275373459e-02
+
+ 6.7832601070404053e-01 -1.7071999609470367e-02
+ <_>
+
+ 0 -1 775 5.7693999260663986e-02
+
+ -1.1955999769270420e-02 -1.2261159420013428e+00
+ <_>
+
+ 0 -1 776 -6.0609998181462288e-03
+
+ 3.3958598971366882e-01 6.2800000887364149e-04
+ <_>
+
+ 0 -1 777 -5.2163001149892807e-02
+
+ -1.0621069669723511e+00 -1.3779999688267708e-02
+ <_>
+
+ 0 -1 778 4.6572998166084290e-02
+
+ 1.4538800716400146e-01 -1.2384550571441650e+00
+ <_>
+
+ 0 -1 779 7.5309998355805874e-03
+
+ -2.4467700719833374e-01 5.1377099752426147e-01
+ <_>
+
+ 0 -1 780 2.1615000441670418e-02
+
+ 1.3072599470615387e-01 -7.0996797084808350e-01
+ <_>
+
+ 0 -1 781 -1.7864000052213669e-02
+
+ -1.0474660396575928e+00 4.9599999329075217e-04
+ <_>
+
+ 0 -1 782 -3.7195000797510147e-02
+
+ -1.5126730203628540e+00 1.4801399409770966e-01
+ <_>
+
+ 0 -1 783 -3.1100001069717109e-04
+
+ 1.3971500098705292e-01 -4.6867498755455017e-01
+ <_>
+
+ 0 -1 784 2.5042999535799026e-02
+
+ 2.8632000088691711e-01 -4.1794699430465698e-01
+ <_>
+
+ 0 -1 785 9.3449996784329414e-03
+
+ -2.7336201071739197e-01 4.3444699048995972e-01
+ <_>
+
+ 0 -1 786 3.2363999634981155e-02
+
+ 1.8438899517059326e-01 -9.5019298791885376e-01
+ <_>
+
+ 0 -1 787 -6.2299999408423901e-03
+
+ 3.2581999897956848e-01 -3.0815601348876953e-01
+ <_>
+
+ 0 -1 788 5.1488999277353287e-02
+
+ 1.1416000127792358e-01 -1.9795479774475098e+00
+ <_>
+
+ 0 -1 789 -2.6449000462889671e-02
+
+ -1.1067299842834473e+00 -8.5519999265670776e-03
+ <_>
+
+ 0 -1 790 -1.5420000068843365e-02
+
+ 8.0138701200485229e-01 -3.2035000622272491e-02
+ <_>
+
+ 0 -1 791 1.9456999376416206e-02
+
+ -2.6449498534202576e-01 3.8753899931907654e-01
+ <_>
+
+ 0 -1 792 3.3620998263359070e-02
+
+ 1.6052000224590302e-02 5.8840900659561157e-01
+ <_>
+
+ 0 -1 793 2.8906000778079033e-02
+
+ 1.5216000378131866e-02 -9.4723600149154663e-01
+ <_>
+
+ 0 -1 794 2.0300000323913991e-04
+
+ -3.0766001343727112e-01 2.1235899627208710e-01
+ <_>
+
+ 0 -1 795 -4.9141999334096909e-02
+
+ -1.6058609485626221e+00 -3.1094999983906746e-02
+ <_>
+
+ 0 -1 796 7.6425999402999878e-02
+
+ 7.4758999049663544e-02 1.1639410257339478e+00
+ <_>
+
+ 0 -1 797 2.3897999897599220e-02
+
+ -6.4320000819861889e-03 -1.1150749921798706e+00
+ <_>
+
+ 0 -1 798 3.8970001041889191e-03
+
+ -2.4105699360370636e-01 2.0858900249004364e-01
+ <_>
+
+ 0 -1 799 -8.9445002377033234e-02
+
+ 1.9157789945602417e+00 -1.5721100568771362e-01
+ <_>
+
+ 0 -1 800 -1.5008999966084957e-02
+
+ -2.5174099206924438e-01 1.8179899454116821e-01
+ <_>
+
+ 0 -1 801 -1.1145999655127525e-02
+
+ -6.9349497556686401e-01 4.4927999377250671e-02
+ <_>
+
+ 0 -1 802 9.4578996300697327e-02
+
+ 1.8102100491523743e-01 -7.4978601932525635e-01
+ <_>
+
+ 0 -1 803 5.5038899183273315e-01
+
+ -3.0974000692367554e-02 -1.6746139526367188e+00
+ <_>
+
+ 0 -1 804 4.1381001472473145e-02
+
+ 6.3910000026226044e-02 7.6561200618743896e-01
+ <_>
+
+ 0 -1 805 2.4771999567747116e-02
+
+ 1.1380000039935112e-02 -8.8559401035308838e-01
+ <_>
+
+ 0 -1 806 5.0999000668525696e-02
+
+ 1.4890299737453461e-01 -2.4634211063385010e+00
+ <_>
+
+ 0 -1 807 -1.6893999651074409e-02
+
+ 3.8870999217033386e-01 -2.9880300164222717e-01
+ <_>
+
+ 0 -1 808 -1.2162300199270248e-01
+
+ -1.5542800426483154e+00 1.6300800442695618e-01
+ <_>
+
+ 0 -1 809 -3.6049999762326479e-03
+
+ 2.1842800080776215e-01 -3.7312099337577820e-01
+ <_>
+
+ 0 -1 810 1.1575400084257126e-01
+
+ -4.7061000019311905e-02 5.9403699636459351e-01
+ <_>
+
+ 0 -1 811 3.6903999745845795e-02
+
+ -2.5508600473403931e-01 5.5397301912307739e-01
+ <_>
+
+ 0 -1 812 1.1483999900519848e-02
+
+ -1.8129499256610870e-01 4.0682798624038696e-01
+ <_>
+
+ 0 -1 813 -2.0233999937772751e-02
+
+ 5.4311197996139526e-01 -2.3822399973869324e-01
+ <_>
+
+ 0 -1 814 -2.8765000402927399e-02
+
+ -6.9172298908233643e-01 1.5943300724029541e-01
+ <_>
+
+ 0 -1 815 -5.8320001699030399e-03
+
+ 2.9447799921035767e-01 -3.4005999565124512e-01
+ <_>
+
+ 0 -1 816 -5.5468998849391937e-02
+
+ 9.2200797796249390e-01 9.4093002378940582e-02
+ <_>
+
+ 0 -1 817 -1.4801000244915485e-02
+
+ -7.9539698362350464e-01 3.1521998345851898e-02
+ <_>
+
+ 0 -1 818 -7.0940000005066395e-03
+
+ 3.3096000552177429e-01 -5.0886999815702438e-02
+ <_>
+
+ 0 -1 819 -4.5124001801013947e-02
+
+ -1.3719749450683594e+00 -2.1408999338746071e-02
+ <_>
+
+ 0 -1 820 6.4377002418041229e-02
+
+ 6.3901998102664948e-02 9.1478300094604492e-01
+ <_>
+
+ 0 -1 821 -1.4727000147104263e-02
+
+ 3.6050599813461304e-01 -2.8614500164985657e-01
+ <_>
+
+ 0 -1 822 4.5007001608610153e-02
+
+ -1.5619699656963348e-01 5.3160297870635986e-01
+ <_>
+
+ 0 -1 823 -1.1330000124871731e-03
+
+ 1.3422900438308716e-01 -4.4358900189399719e-01
+ <_>
+
+ 0 -1 824 4.9451000988483429e-02
+
+ 1.0571800172328949e-01 -2.5589139461517334e+00
+ <_>
+
+ 0 -1 825 2.9102999716997147e-02
+
+ -1.0088000446557999e-02 -1.1073939800262451e+00
+ <_>
+
+ 0 -1 826 3.4786000847816467e-02
+
+ -2.7719999197870493e-03 5.6700998544692993e-01
+ <_>
+
+ 0 -1 827 -6.1309998854994774e-03
+
+ -4.6889400482177734e-01 1.2636399269104004e-01
+ <_>
+
+ 0 -1 828 1.5525000169873238e-02
+
+ -8.4279999136924744e-03 8.7469202280044556e-01
+ <_>
+
+ 0 -1 829 2.9249999206513166e-03
+
+ -3.4434300661087036e-01 2.0851600170135498e-01
+ <_>
+
+ 0 -1 830 -5.3571000695228577e-02
+
+ 1.4982949495315552e+00 5.7328000664710999e-02
+ <_>
+
+ 0 -1 831 -1.9217999652028084e-02
+
+ -9.9234098196029663e-01 -9.3919998034834862e-03
+ <_>
+
+ 0 -1 832 -5.5282998830080032e-02
+
+ -5.7682299613952637e-01 1.6860599815845490e-01
+ <_>
+
+ 0 -1 833 5.6336000561714172e-02
+
+ -3.3775001764297485e-02 -1.3889650106430054e+00
+ <_>
+
+ 0 -1 834 -2.3824000731110573e-02
+
+ 4.0182098746299744e-01 1.8360000103712082e-03
+ <_>
+
+ 0 -1 835 1.7810000572353601e-03
+
+ 1.8145999312400818e-01 -4.1743400692939758e-01
+ <_>
+
+ 0 -1 836 -3.7689000368118286e-02
+
+ 5.4683101177215576e-01 1.8219999969005585e-02
+ <_>
+
+ 0 -1 837 -2.4144999682903290e-02
+
+ 6.8352097272872925e-01 -1.9650200009346008e-01
+ <_>
+ 135
+ -3.7025990486145020e+00
+
+ <_>
+
+ 0 -1 838 2.7444999665021896e-02
+
+ -8.9984202384948730e-01 5.1876497268676758e-01
+ <_>
+
+ 0 -1 839 1.1554100364446640e-01
+
+ -5.6524401903152466e-01 7.0551300048828125e-01
+ <_>
+
+ 0 -1 840 -2.2297000512480736e-02
+
+ 3.6079999804496765e-01 -6.6864597797393799e-01
+ <_>
+
+ 0 -1 841 1.3325000181794167e-02
+
+ -5.5573397874832153e-01 3.5789999365806580e-01
+ <_>
+
+ 0 -1 842 -3.8060001097619534e-03
+
+ -1.0713000297546387e+00 1.8850000202655792e-01
+ <_>
+
+ 0 -1 843 -2.6819999329745770e-03
+
+ -7.1584302186965942e-01 2.6344498991966248e-01
+ <_>
+
+ 0 -1 844 3.3819999080151320e-03
+
+ -4.6930798888206482e-01 2.6658400893211365e-01
+ <_>
+
+ 0 -1 845 3.7643000483512878e-02
+
+ 2.1098700165748596e-01 -1.0804339647293091e+00
+ <_>
+
+ 0 -1 846 -1.3861999846994877e-02
+
+ 6.6912001371383667e-01 -2.7942800521850586e-01
+ <_>
+
+ 0 -1 847 -2.7350001037120819e-03
+
+ -9.5332300662994385e-01 2.4051299691200256e-01
+ <_>
+
+ 0 -1 848 -3.8336999714374542e-02
+
+ 8.1432801485061646e-01 -2.4919399619102478e-01
+ <_>
+
+ 0 -1 849 -3.4697998315095901e-02
+
+ 1.2330100536346436e+00 6.8600000813603401e-03
+ <_>
+
+ 0 -1 850 2.3360999301075935e-02
+
+ -3.0794700980186462e-01 7.0714497566223145e-01
+ <_>
+
+ 0 -1 851 3.5057999193668365e-02
+
+ 2.1205900609493256e-01 -1.4399830102920532e+00
+ <_>
+
+ 0 -1 852 -1.3256999664008617e-02
+
+ -9.0260702371597290e-01 4.8610001802444458e-02
+ <_>
+
+ 0 -1 853 1.2740000151097775e-02
+
+ 2.2655199468135834e-01 -4.4643801450729370e-01
+ <_>
+
+ 0 -1 854 3.6400000099092722e-03
+
+ -3.9817899465560913e-01 3.4665399789810181e-01
+ <_>
+
+ 0 -1 855 1.0064700245857239e-01
+
+ 1.8383599817752838e-01 -1.3410769701004028e+00
+ <_>
+
+ 0 -1 856 0.
+
+ 1.5536400675773621e-01 -5.1582497358322144e-01
+ <_>
+
+ 0 -1 857 1.1708999983966351e-02
+
+ 2.1651400625705719e-01 -7.2705197334289551e-01
+ <_>
+
+ 0 -1 858 -3.5964999347925186e-02
+
+ -1.4789500236511230e+00 -2.4317000061273575e-02
+ <_>
+
+ 0 -1 859 -2.1236000582575798e-02
+
+ -1.6844099760055542e-01 1.9526599347591400e-01
+ <_>
+
+ 0 -1 860 1.4874000102281570e-02
+
+ 3.7335999310016632e-02 -8.7557297945022583e-01
+ <_>
+
+ 0 -1 861 -5.1409997977316380e-03
+
+ 3.3466500043869019e-01 -2.4109700322151184e-01
+ <_>
+
+ 0 -1 862 2.3450000211596489e-02
+
+ 5.5320002138614655e-03 -1.2509720325469971e+00
+ <_>
+
+ 0 -1 863 -2.5062000378966331e-02
+
+ 4.5212399959564209e-01 -8.4469996392726898e-02
+ <_>
+
+ 0 -1 864 -7.7400001464411616e-04
+
+ 1.5249900519847870e-01 -4.8486500978469849e-01
+ <_>
+
+ 0 -1 865 -4.0483999997377396e-02
+
+ -1.3024920225143433e+00 1.7983500659465790e-01
+ <_>
+
+ 0 -1 866 2.8170999139547348e-02
+
+ -2.4410900473594666e-01 6.2271100282669067e-01
+ <_>
+
+ 0 -1 867 4.5692998915910721e-02
+
+ 2.8122000396251678e-02 9.2394399642944336e-01
+ <_>
+
+ 0 -1 868 3.9707001298666000e-02
+
+ -2.2332799434661865e-01 7.7674001455307007e-01
+ <_>
+
+ 0 -1 869 5.0517000257968903e-02
+
+ 2.0319999754428864e-01 -1.0895930528640747e+00
+ <_>
+
+ 0 -1 870 -1.7266999930143356e-02
+
+ 6.8598401546478271e-01 -2.3304499685764313e-01
+ <_>
+
+ 0 -1 871 8.0186001956462860e-02
+
+ -1.0292000137269497e-02 6.1881101131439209e-01
+ <_>
+
+ 0 -1 872 9.7676001489162445e-02
+
+ -2.0070299506187439e-01 1.0088349580764771e+00
+ <_>
+
+ 0 -1 873 -1.5572000294923782e-02
+
+ 4.7615298628807068e-01 4.5623999089002609e-02
+ <_>
+
+ 0 -1 874 -1.5305000357329845e-02
+
+ -1.1077369451522827e+00 4.5239999890327454e-03
+ <_>
+
+ 0 -1 875 -1.6485000029206276e-02
+
+ 1.0152939558029175e+00 1.6327999532222748e-02
+ <_>
+
+ 0 -1 876 -2.6141999289393425e-02
+
+ 4.1723299026489258e-01 -2.8645500540733337e-01
+ <_>
+
+ 0 -1 877 8.8679995387792587e-03
+
+ 2.1404999494552612e-01 -1.6772800683975220e-01
+ <_>
+
+ 0 -1 878 -2.6886999607086182e-02
+
+ -1.1564220190048218e+00 -1.0324000380933285e-02
+ <_>
+
+ 0 -1 879 7.7789998613297939e-03
+
+ 3.5359498858451843e-01 -2.9611301422119141e-01
+ <_>
+
+ 0 -1 880 -1.5974000096321106e-02
+
+ -1.5374109745025635e+00 -2.9958000406622887e-02
+ <_>
+
+ 0 -1 881 2.0866999402642250e-02
+
+ 2.0244100689888000e-01 -7.1270197629928589e-01
+ <_>
+
+ 0 -1 882 8.5482001304626465e-02
+
+ -2.5932999327778816e-02 -1.5156569480895996e+00
+ <_>
+
+ 0 -1 883 2.3872999474406242e-02
+
+ 1.6803400218486786e-01 -3.8806200027465820e-01
+ <_>
+
+ 0 -1 884 -3.9105001837015152e-02
+
+ -1.1958349943161011e+00 -2.0361000671982765e-02
+ <_>
+
+ 0 -1 885 -7.7946998178958893e-02
+
+ -1.0898950099945068e+00 1.4530299603939056e-01
+ <_>
+
+ 0 -1 886 -1.6876000910997391e-02
+
+ 2.8049701452255249e-01 -4.1336300969123840e-01
+ <_>
+
+ 0 -1 887 1.1875600367784500e-01
+
+ -4.3490998446941376e-02 4.1263699531555176e-01
+ <_>
+
+ 0 -1 888 1.5624199807643890e-01
+
+ -2.6429599523544312e-01 5.5127799510955811e-01
+ <_>
+
+ 0 -1 889 -4.5908000320196152e-02
+
+ 6.0189199447631836e-01 1.8921000882983208e-02
+ <_>
+
+ 0 -1 890 -1.0309999808669090e-02
+
+ 3.8152998685836792e-01 -2.9507899284362793e-01
+ <_>
+
+ 0 -1 891 9.5769003033638000e-02
+
+ 1.3246500492095947e-01 -4.6266800165176392e-01
+ <_>
+
+ 0 -1 892 1.3686999678611755e-02
+
+ 1.1738699674606323e-01 -5.1664102077484131e-01
+ <_>
+
+ 0 -1 893 2.3990001063793898e-03
+
+ -3.4007599949836731e-01 2.0953500270843506e-01
+ <_>
+
+ 0 -1 894 3.3264998346567154e-02
+
+ -1.7052799463272095e-01 1.4366799592971802e+00
+ <_>
+
+ 0 -1 895 -3.3206000924110413e-02
+
+ 6.1295700073242188e-01 -4.1549999266862869e-02
+ <_>
+
+ 0 -1 896 2.7979998849332333e-03
+
+ -4.8554301261901855e-01 1.3372699916362762e-01
+ <_>
+
+ 0 -1 897 -6.5792001783847809e-02
+
+ -4.0257668495178223e+00 1.0876700282096863e-01
+ <_>
+
+ 0 -1 898 2.1430000197142363e-03
+
+ -3.9179998636245728e-01 2.2427099943161011e-01
+ <_>
+
+ 0 -1 899 2.2363999858498573e-02
+
+ -8.6429998278617859e-02 3.7785199284553528e-01
+ <_>
+
+ 0 -1 900 -5.7410001754760742e-02
+
+ 1.1454069614410400e+00 -1.9736599922180176e-01
+ <_>
+
+ 0 -1 901 6.6550001502037048e-03
+
+ -2.1105000749230385e-02 5.8453398942947388e-01
+ <_>
+
+ 0 -1 902 1.2326999567449093e-02
+
+ 3.7817001342773438e-02 -6.6987001895904541e-01
+ <_>
+
+ 0 -1 903 -8.1869997084140778e-03
+
+ 5.6366002559661865e-01 -7.6877996325492859e-02
+ <_>
+
+ 0 -1 904 3.6681000143289566e-02
+
+ -1.7343300580978394e-01 1.1670149564743042e+00
+ <_>
+
+ 0 -1 905 -4.0220400691032410e-01
+
+ 1.2640819549560547e+00 4.3398998677730560e-02
+ <_>
+
+ 0 -1 906 -2.2126000374555588e-02
+
+ 6.6978102922439575e-01 -2.1605299413204193e-01
+ <_>
+
+ 0 -1 907 -1.3156999833881855e-02
+
+ -4.1198599338531494e-01 2.0215000212192535e-01
+ <_>
+
+ 0 -1 908 -1.2860000133514404e-02
+
+ -9.1582697629928589e-01 3.9232999086380005e-02
+ <_>
+
+ 0 -1 909 2.1627999842166901e-02
+
+ 3.8719999138265848e-03 3.5668200254440308e-01
+ <_>
+
+ 0 -1 910 1.1896000243723392e-02
+
+ -3.7303900718688965e-01 1.9235099852085114e-01
+ <_>
+
+ 0 -1 911 -1.9548999145627022e-02
+
+ -4.2374899983406067e-01 2.4429599940776825e-01
+ <_>
+
+ 0 -1 912 6.4444996416568756e-02
+
+ -1.6558900475502014e-01 1.2697030305862427e+00
+ <_>
+
+ 0 -1 913 1.0898499935865402e-01
+
+ 1.4894300699234009e-01 -2.1534640789031982e+00
+ <_>
+
+ 0 -1 914 -3.4077998250722885e-02
+
+ 1.3779460191726685e+00 -1.6198499500751495e-01
+ <_>
+
+ 0 -1 915 -3.7489999085664749e-03
+
+ -3.3828601241111755e-01 2.1152900159358978e-01
+ <_>
+
+ 0 -1 916 -1.0971999727189541e-02
+
+ 7.6517897844314575e-01 -1.9692599773406982e-01
+ <_>
+
+ 0 -1 917 -1.1485000140964985e-02
+
+ -6.9271200895309448e-01 2.1657100319862366e-01
+ <_>
+
+ 0 -1 918 2.5984000414609909e-02
+
+ -1.1983999982476234e-02 -9.9697297811508179e-01
+ <_>
+
+ 0 -1 919 4.2159999720752239e-03
+
+ -1.0205700248479843e-01 4.8884400725364685e-01
+ <_>
+
+ 0 -1 920 -4.7697000205516815e-02
+
+ 1.0666010379791260e+00 -1.7576299607753754e-01
+ <_>
+
+ 0 -1 921 4.0300001273863018e-04
+
+ 1.8524800240993500e-01 -7.4790000915527344e-01
+ <_>
+
+ 0 -1 922 1.1539600044488907e-01
+
+ -2.2019700706005096e-01 5.4509997367858887e-01
+ <_>
+
+ 0 -1 923 1.6021000221371651e-02
+
+ 2.5487500429153442e-01 -5.0740098953247070e-01
+ <_>
+
+ 0 -1 924 5.6632000952959061e-02
+
+ -1.1256000027060509e-02 -9.5968097448348999e-01
+ <_>
+
+ 0 -1 925 -1.0726000182330608e-02
+
+ -2.8544700145721436e-01 1.6994799673557281e-01
+ <_>
+
+ 0 -1 926 1.2420000135898590e-01
+
+ -3.6139998584985733e-02 -1.3132710456848145e+00
+ <_>
+
+ 0 -1 927 -5.3799999877810478e-03
+
+ 3.3092701435089111e-01 1.3307999819517136e-02
+ <_>
+
+ 0 -1 928 1.1908000335097313e-02
+
+ -3.4830299019813538e-01 2.4041900038719177e-01
+ <_>
+
+ 0 -1 929 -4.3007999658584595e-02
+
+ -1.4390469789505005e+00 1.5599599480628967e-01
+ <_>
+
+ 0 -1 930 -3.3149998635053635e-02
+
+ -1.1805850267410278e+00 -1.2347999960184097e-02
+ <_>
+
+ 0 -1 931 -2.1341999992728233e-02
+
+ 2.2119441032409668e+00 6.2737002968788147e-02
+ <_>
+
+ 0 -1 932 -1.2218999676406384e-02
+
+ -1.8709750175476074e+00 -4.5499999076128006e-02
+ <_>
+
+ 0 -1 933 -1.6860999166965485e-02
+
+ -7.6912701129913330e-01 1.5330000221729279e-01
+ <_>
+
+ 0 -1 934 -2.4999999441206455e-03
+
+ -6.2987399101257324e-01 5.1600001752376556e-02
+ <_>
+
+ 0 -1 935 -4.5037999749183655e-02
+
+ 8.5428899526596069e-01 6.2600001692771912e-03
+ <_>
+
+ 0 -1 936 3.9057999849319458e-02
+
+ -3.2458998262882233e-02 -1.3325669765472412e+00
+ <_>
+
+ 0 -1 937 6.6720000468194485e-03
+
+ -1.9423599541187286e-01 3.7328699231147766e-01
+ <_>
+
+ 0 -1 938 -1.6361000016331673e-02
+
+ 2.0605869293212891e+00 -1.5042699873447418e-01
+ <_>
+
+ 0 -1 939 6.1719999648630619e-03
+
+ -1.1610999703407288e-01 2.5455400347709656e-01
+ <_>
+
+ 0 -1 940 4.5722000300884247e-02
+
+ -1.6340000554919243e-02 -1.0449140071868896e+00
+ <_>
+
+ 0 -1 941 4.1209999471902847e-03
+
+ -4.1997998952865601e-02 3.9680999517440796e-01
+ <_>
+
+ 0 -1 942 -1.7800000205170363e-04
+
+ -6.6422599554061890e-01 3.3443000167608261e-02
+ <_>
+
+ 0 -1 943 7.1109998971223831e-03
+
+ -5.8231998234987259e-02 3.7857300043106079e-01
+ <_>
+
+ 0 -1 944 -4.9864001572132111e-02
+
+ 6.1019402742385864e-01 -2.1005700528621674e-01
+ <_>
+
+ 0 -1 945 -2.5011999532580376e-02
+
+ -5.7100099325180054e-01 1.7848399281501770e-01
+ <_>
+
+ 0 -1 946 3.0939999967813492e-02
+
+ 5.6363001465797424e-02 -6.4731001853942871e-01
+ <_>
+
+ 0 -1 947 4.6271000057458878e-02
+
+ 1.7482399940490723e-01 -9.8909401893615723e-01
+ <_>
+
+ 0 -1 948 -3.1870000530034304e-03
+
+ -6.6804802417755127e-01 3.2267000526189804e-02
+ <_>
+
+ 0 -1 949 -2.4351999163627625e-02
+
+ 2.9444900155067444e-01 -1.3599999947473407e-03
+ <_>
+
+ 0 -1 950 1.1974000371992588e-02
+
+ -2.8345099091529846e-01 4.7171199321746826e-01
+ <_>
+
+ 0 -1 951 1.3070000335574150e-02
+
+ -1.0834600031375885e-01 5.7193297147750854e-01
+ <_>
+
+ 0 -1 952 5.9163000434637070e-02
+
+ -5.0939001142978668e-02 -1.9059720039367676e+00
+ <_>
+
+ 0 -1 953 -4.1094999760389328e-02
+
+ 4.5104598999023438e-01 -9.7599998116493225e-03
+ <_>
+
+ 0 -1 954 -8.3989001810550690e-02
+
+ -2.0349199771881104e+00 -5.1019001752138138e-02
+ <_>
+
+ 0 -1 955 4.4619001448154449e-02
+
+ 1.7041100561618805e-01 -1.2278720140457153e+00
+ <_>
+
+ 0 -1 956 2.4419000372290611e-02
+
+ -2.1796999499201775e-02 -1.0822949409484863e+00
+ <_>
+
+ 0 -1 957 -4.3870001100003719e-03
+
+ 3.0466699600219727e-01 -3.7066599726676941e-01
+ <_>
+
+ 0 -1 958 2.4607999250292778e-02
+
+ -3.1169500946998596e-01 2.3657299578189850e-01
+ <_>
+
+ 0 -1 959 -8.5182003676891327e-02
+
+ -1.7982350587844849e+00 1.5254299342632294e-01
+ <_>
+
+ 0 -1 960 2.1844999864697456e-02
+
+ -5.1888000220060349e-02 -1.9017189741134644e+00
+ <_>
+
+ 0 -1 961 -1.6829000785946846e-02
+
+ 2.1025900542736053e-01 2.1656999364495277e-02
+ <_>
+
+ 0 -1 962 3.2547999173402786e-02
+
+ -2.0292599499225616e-01 6.0944002866744995e-01
+ <_>
+
+ 0 -1 963 2.4709999561309814e-03
+
+ -9.5371198654174805e-01 1.8568399548530579e-01
+ <_>
+
+ 0 -1 964 5.5415999144315720e-02
+
+ -1.4405299723148346e-01 2.1506340503692627e+00
+ <_>
+
+ 0 -1 965 -1.0635499656200409e-01
+
+ -1.0911970138549805e+00 1.3228000700473785e-01
+ <_>
+
+ 0 -1 966 -7.9889995977282524e-03
+
+ 1.0253400355577469e-01 -5.1744902133941650e-01
+ <_>
+
+ 0 -1 967 7.5567997992038727e-02
+
+ 5.8965001255273819e-02 1.2354209423065186e+00
+ <_>
+
+ 0 -1 968 -9.2805996537208557e-02
+
+ -1.3431650400161743e+00 -3.4462999552488327e-02
+ <_>
+
+ 0 -1 969 4.9431998282670975e-02
+
+ 4.9601998180150986e-02 1.6054730415344238e+00
+ <_>
+
+ 0 -1 970 -1.1772999539971352e-02
+
+ -1.0261050462722778e+00 -4.1559999808669090e-03
+ <_>
+
+ 0 -1 971 8.5886001586914062e-02
+
+ 8.4642998874187469e-02 9.5220798254013062e-01
+ <_>
+
+ 0 -1 972 8.1031002104282379e-02
+
+ -1.4687100052833557e-01 1.9359990358352661e+00
+ <_>
+ 136
+ -3.4265899658203125e+00
+
+ <_>
+
+ 0 -1 973 -3.3840999007225037e-02
+
+ 6.5889501571655273e-01 -6.9755297899246216e-01
+ <_>
+
+ 0 -1 974 1.5410000458359718e-02
+
+ -9.0728402137756348e-01 3.0478599667549133e-01
+ <_>
+
+ 0 -1 975 5.4905999451875687e-02
+
+ -4.9774798750877380e-01 5.7132601737976074e-01
+ <_>
+
+ 0 -1 976 2.1390000358223915e-02
+
+ -4.2565199732780457e-01 5.8096802234649658e-01
+ <_>
+
+ 0 -1 977 7.8849997371435165e-03
+
+ -4.7905999422073364e-01 4.3016499280929565e-01
+ <_>
+
+ 0 -1 978 -3.7544999271631241e-02
+
+ 5.0861597061157227e-01 -1.9985899329185486e-01
+ <_>
+
+ 0 -1 979 1.5925799310207367e-01
+
+ -2.3263600468635559e-01 1.0993319749832153e+00
+ <_>
+
+ 0 -1 980 -6.8939998745918274e-02
+
+ 4.0569001436233521e-01 5.6855000555515289e-02
+ <_>
+
+ 0 -1 981 -3.3695001155138016e-02
+
+ 4.5132800936698914e-01 -3.3332800865173340e-01
+ <_>
+
+ 0 -1 982 -6.3314996659755707e-02
+
+ -8.5015702247619629e-01 2.2341699898242950e-01
+ <_>
+
+ 0 -1 983 7.3699997738003731e-03
+
+ -9.3082201480865479e-01 5.9216998517513275e-02
+ <_>
+
+ 0 -1 984 -9.5969997346401215e-03
+
+ -1.2794899940490723e+00 1.8447299301624298e-01
+ <_>
+
+ 0 -1 985 -1.3067999482154846e-01
+
+ 5.8426898717880249e-01 -2.6007199287414551e-01
+ <_>
+
+ 0 -1 986 5.7402998208999634e-02
+
+ -5.3789000958204269e-02 7.1175599098205566e-01
+ <_>
+
+ 0 -1 987 -7.2340001352131367e-03
+
+ -8.6962199211120605e-01 7.5214996933937073e-02
+ <_>
+
+ 0 -1 988 3.1098999083042145e-02
+
+ -7.5006999075412750e-02 9.0781599283218384e-01
+ <_>
+
+ 0 -1 989 3.5854000598192215e-02
+
+ -2.4795499444007874e-01 7.2272098064422607e-01
+ <_>
+
+ 0 -1 990 -3.1534999608993530e-02
+
+ -1.1238329410552979e+00 2.0988300442695618e-01
+ <_>
+
+ 0 -1 991 -1.9437000155448914e-02
+
+ -1.4499390125274658e+00 -1.5100000426173210e-02
+ <_>
+
+ 0 -1 992 -7.2420001961290836e-03
+
+ 5.3864902257919312e-01 -1.1375399678945541e-01
+ <_>
+
+ 0 -1 993 8.1639997661113739e-03
+
+ 6.6889002919197083e-02 -7.6872897148132324e-01
+ <_>
+
+ 0 -1 994 -4.3653000146150589e-02
+
+ 1.1413530111312866e+00 4.0217000991106033e-02
+ <_>
+
+ 0 -1 995 2.6569999754428864e-02
+
+ -2.4719099700450897e-01 5.9295099973678589e-01
+ <_>
+
+ 0 -1 996 3.2216999679803848e-02
+
+ -4.0024999529123306e-02 3.2688000798225403e-01
+ <_>
+
+ 0 -1 997 -7.2236001491546631e-02
+
+ 5.8729398250579834e-01 -2.5396001338958740e-01
+ <_>
+
+ 0 -1 998 3.1424999237060547e-02
+
+ 1.5315100550651550e-01 -5.6042098999023438e-01
+ <_>
+
+ 0 -1 999 -4.7699999413453043e-04
+
+ 1.6958899796009064e-01 -5.2626699209213257e-01
+ <_>
+
+ 0 -1 1000 2.7189999818801880e-03
+
+ -1.4944599568843842e-01 2.9658699035644531e-01
+ <_>
+
+ 0 -1 1001 3.2875001430511475e-02
+
+ -3.9943501353263855e-01 2.5156599283218384e-01
+ <_>
+
+ 0 -1 1002 -1.4553000219166279e-02
+
+ 2.7972599864006042e-01 -4.7203800082206726e-01
+ <_>
+
+ 0 -1 1003 3.8017999380826950e-02
+
+ -2.9200001154094934e-03 -1.1300059556961060e+00
+ <_>
+
+ 0 -1 1004 2.8659999370574951e-03
+
+ 4.1111800074577332e-01 -2.6220801472663879e-01
+ <_>
+
+ 0 -1 1005 -4.1606999933719635e-02
+
+ -1.4293819665908813e+00 -1.9132999703288078e-02
+ <_>
+
+ 0 -1 1006 -2.4802999570965767e-02
+
+ -2.5013598799705505e-01 1.5978699922561646e-01
+ <_>
+
+ 0 -1 1007 1.0098000057041645e-02
+
+ 4.3738998472690582e-02 -6.9986099004745483e-01
+ <_>
+
+ 0 -1 1008 -2.0947000011801720e-02
+
+ -9.4137799739837646e-01 2.3204000294208527e-01
+ <_>
+
+ 0 -1 1009 2.2458000108599663e-02
+
+ -2.7185800671577454e-01 4.5319199562072754e-01
+ <_>
+
+ 0 -1 1010 -3.7110999226570129e-02
+
+ -1.0314660072326660e+00 1.4421799778938293e-01
+ <_>
+
+ 0 -1 1011 -1.0648000054061413e-02
+
+ 6.3107001781463623e-01 -2.5520798563957214e-01
+ <_>
+
+ 0 -1 1012 5.5422998964786530e-02
+
+ 1.6206599771976471e-01 -1.7722640037536621e+00
+ <_>
+
+ 0 -1 1013 2.1601999178528786e-02
+
+ -2.5016099214553833e-01 5.4119801521301270e-01
+ <_>
+
+ 0 -1 1014 8.7000000348780304e-05
+
+ -2.9008901119232178e-01 3.3507999777793884e-01
+ <_>
+
+ 0 -1 1015 1.4406000263988972e-02
+
+ -7.8840004280209541e-03 -1.1677219867706299e+00
+ <_>
+
+ 0 -1 1016 1.0777399688959122e-01
+
+ 1.1292000114917755e-01 -2.4940319061279297e+00
+ <_>
+
+ 0 -1 1017 3.5943999886512756e-02
+
+ -1.9480599462985992e-01 9.5757502317428589e-01
+ <_>
+
+ 0 -1 1018 -3.9510000497102737e-03
+
+ 3.0927801132202148e-01 -2.5530201196670532e-01
+ <_>
+
+ 0 -1 1019 2.0942000672221184e-02
+
+ -7.6319999061524868e-03 -1.0086350440979004e+00
+ <_>
+
+ 0 -1 1020 -2.9877999797463417e-02
+
+ -4.6027699112892151e-01 1.9507199525833130e-01
+ <_>
+
+ 0 -1 1021 2.5971999391913414e-02
+
+ -1.2187999673187733e-02 -1.0035500526428223e+00
+ <_>
+
+ 0 -1 1022 1.0603000409901142e-02
+
+ -7.5969003140926361e-02 4.1669899225234985e-01
+ <_>
+
+ 0 -1 1023 8.5819996893405914e-03
+
+ -2.6648598909378052e-01 3.9111500978469849e-01
+ <_>
+
+ 0 -1 1024 2.1270999684929848e-02
+
+ 1.8273900449275970e-01 -3.6052298545837402e-01
+ <_>
+
+ 0 -1 1025 7.4518002569675446e-02
+
+ -1.8938399851322174e-01 9.2658001184463501e-01
+ <_>
+
+ 0 -1 1026 4.6569998376071453e-03
+
+ -1.4506199955940247e-01 3.3294600248336792e-01
+ <_>
+
+ 0 -1 1027 1.7119999974966049e-03
+
+ -5.2464002370834351e-01 8.9879997074604034e-02
+ <_>
+
+ 0 -1 1028 9.8500004969537258e-04
+
+ -3.8381999731063843e-01 2.4392999708652496e-01
+ <_>
+
+ 0 -1 1029 2.8233999386429787e-02
+
+ -5.7879998348653316e-03 -1.2617139816284180e+00
+ <_>
+
+ 0 -1 1030 -3.2678000628948212e-02
+
+ -5.7953298091888428e-01 1.6955299675464630e-01
+ <_>
+
+ 0 -1 1031 2.2536000236868858e-02
+
+ 2.2281000390648842e-02 -8.7869602441787720e-01
+ <_>
+
+ 0 -1 1032 -2.1657999604940414e-02
+
+ -6.5108501911163330e-01 1.2966899573802948e-01
+ <_>
+
+ 0 -1 1033 7.6799998059868813e-03
+
+ -3.3965200185775757e-01 2.2013300657272339e-01
+ <_>
+
+ 0 -1 1034 1.4592000283300877e-02
+
+ 1.5077300369739532e-01 -5.0452399253845215e-01
+ <_>
+
+ 0 -1 1035 2.7868000790476799e-02
+
+ -2.5045299530029297e-01 4.5741999149322510e-01
+ <_>
+
+ 0 -1 1036 5.6940000504255295e-03
+
+ -1.0948500037193298e-01 5.5757802724838257e-01
+ <_>
+
+ 0 -1 1037 -1.0002999566495419e-02
+
+ -9.7366297245025635e-01 1.8467999994754791e-02
+ <_>
+
+ 0 -1 1038 -4.0719998069107533e-03
+
+ 3.8222199678421021e-01 -1.6921100020408630e-01
+ <_>
+
+ 0 -1 1039 -2.2593999281525612e-02
+
+ -1.0391089916229248e+00 5.1839998923242092e-03
+ <_>
+
+ 0 -1 1040 -3.9579998701810837e-02
+
+ -5.5109229087829590e+00 1.1163999885320663e-01
+ <_>
+
+ 0 -1 1041 -1.7537999898195267e-02
+
+ 9.5485800504684448e-01 -1.8584500253200531e-01
+ <_>
+
+ 0 -1 1042 9.0300003066658974e-03
+
+ 1.0436000302433968e-02 8.2114797830581665e-01
+ <_>
+
+ 0 -1 1043 -7.9539995640516281e-03
+
+ 2.2632899880409241e-01 -3.4568199515342712e-01
+ <_>
+
+ 0 -1 1044 2.7091000229120255e-02
+
+ 1.6430099308490753e-01 -1.3926379680633545e+00
+ <_>
+
+ 0 -1 1045 -2.0625999197363853e-02
+
+ -8.6366099119186401e-01 2.3880000226199627e-03
+ <_>
+
+ 0 -1 1046 -7.1989998221397400e-02
+
+ -2.8192629814147949e+00 1.1570499837398529e-01
+ <_>
+
+ 0 -1 1047 -2.6964999735355377e-02
+
+ -1.2946130037307739e+00 -2.4661000818014145e-02
+ <_>
+
+ 0 -1 1048 -4.7377999871969223e-02
+
+ -8.1306397914886475e-01 1.1831399798393250e-01
+ <_>
+
+ 0 -1 1049 -1.0895600169897079e-01
+
+ 6.5937900543212891e-01 -2.0843900740146637e-01
+ <_>
+
+ 0 -1 1050 1.3574000447988510e-02
+
+ 7.4240001849830151e-03 5.3152197599411011e-01
+ <_>
+
+ 0 -1 1051 -6.6920001991093159e-03
+
+ 3.0655801296234131e-01 -3.1084299087524414e-01
+ <_>
+
+ 0 -1 1052 -3.9070001803338528e-03
+
+ 2.5576499104499817e-01 -5.2932001650333405e-02
+ <_>
+
+ 0 -1 1053 -3.7613000720739365e-02
+
+ -1.4350049495697021e+00 -1.5448000282049179e-02
+ <_>
+
+ 0 -1 1054 8.6329998448491096e-03
+
+ -1.6884399950504303e-01 4.2124900221824646e-01
+ <_>
+
+ 0 -1 1055 -3.2097000628709793e-02
+
+ -6.4979398250579834e-01 4.1110001504421234e-02
+ <_>
+
+ 0 -1 1056 5.8495998382568359e-02
+
+ -5.2963998168706894e-02 6.3368302583694458e-01
+ <_>
+
+ 0 -1 1057 -4.0901999920606613e-02
+
+ -9.2101097106933594e-01 9.0640000998973846e-03
+ <_>
+
+ 0 -1 1058 -1.9925000146031380e-02
+
+ 5.3759998083114624e-01 -6.2996998429298401e-02
+ <_>
+
+ 0 -1 1059 -4.6020001173019409e-03
+
+ -5.4333502054214478e-01 8.4104999899864197e-02
+ <_>
+
+ 0 -1 1060 1.6824999824166298e-02
+
+ 1.5563699603080750e-01 -4.0171200037002563e-01
+ <_>
+
+ 0 -1 1061 9.4790002331137657e-03
+
+ -2.4245299398899078e-01 5.1509499549865723e-01
+ <_>
+
+ 0 -1 1062 -1.9534999504685402e-02
+
+ -5.1118397712707520e-01 1.3831999897956848e-01
+ <_>
+
+ 0 -1 1063 1.0746000334620476e-02
+
+ -2.1854999661445618e-01 6.2828701734542847e-01
+ <_>
+
+ 0 -1 1064 3.7927001714706421e-02
+
+ 1.1640299856662750e-01 -2.7301959991455078e+00
+ <_>
+
+ 0 -1 1065 1.6390999779105186e-02
+
+ -1.4635999687016010e-02 -1.0797250270843506e+00
+ <_>
+
+ 0 -1 1066 -1.9785000011324883e-02
+
+ 1.2166420221328735e+00 3.3275000751018524e-02
+ <_>
+
+ 0 -1 1067 1.1067000217735767e-02
+
+ -2.5388300418853760e-01 4.4038599729537964e-01
+ <_>
+
+ 0 -1 1068 5.2479999139904976e-03
+
+ 2.2496800124645233e-01 -2.4216499924659729e-01
+ <_>
+
+ 0 -1 1069 -1.1141999624669552e-02
+
+ 2.5018098950386047e-01 -3.0811500549316406e-01
+ <_>
+
+ 0 -1 1070 -1.0666999965906143e-02
+
+ -3.2729101181030273e-01 2.6168298721313477e-01
+ <_>
+
+ 0 -1 1071 1.0545299947261810e-01
+
+ -5.5750001221895218e-02 -1.9605729579925537e+00
+ <_>
+
+ 0 -1 1072 5.4827999323606491e-02
+
+ -1.9519999623298645e-03 7.3866099119186401e-01
+ <_>
+
+ 0 -1 1073 1.7760999500751495e-02
+
+ -3.0647200345993042e-01 2.6346999406814575e-01
+ <_>
+
+ 0 -1 1074 -3.1185999512672424e-02
+
+ -2.4600900709629059e-01 1.7082199454307556e-01
+ <_>
+
+ 0 -1 1075 -5.7296000421047211e-02
+
+ 4.7033500671386719e-01 -2.6048299670219421e-01
+ <_>
+
+ 0 -1 1076 -1.1312000453472137e-02
+
+ 3.8628900051116943e-01 -2.8817000985145569e-01
+ <_>
+
+ 0 -1 1077 3.0592000111937523e-02
+
+ -4.8826001584529877e-02 -1.7638969421386719e+00
+ <_>
+
+ 0 -1 1078 1.8489999929443002e-03
+
+ 2.1099899709224701e-01 -2.5940999388694763e-02
+ <_>
+
+ 0 -1 1079 1.1419000104069710e-02
+
+ -1.6829599440097809e-01 1.0278660058975220e+00
+ <_>
+
+ 0 -1 1080 8.1403002142906189e-02
+
+ 1.1531999707221985e-01 -1.2482399940490723e+00
+ <_>
+
+ 0 -1 1081 5.3495999425649643e-02
+
+ -4.6303998678922653e-02 -1.7165969610214233e+00
+ <_>
+
+ 0 -1 1082 -2.3948000743985176e-02
+
+ -4.0246599912643433e-01 2.0562100410461426e-01
+ <_>
+
+ 0 -1 1083 6.7690000869333744e-03
+
+ -3.3152300119400024e-01 2.0683400332927704e-01
+ <_>
+
+ 0 -1 1084 -3.2343998551368713e-02
+
+ -7.2632801532745361e-01 2.0073500275611877e-01
+ <_>
+
+ 0 -1 1085 3.7863001227378845e-02
+
+ -1.5631000697612762e-01 1.6697460412979126e+00
+ <_>
+
+ 0 -1 1086 1.5440000221133232e-02
+
+ 1.9487400352954865e-01 -3.5384199023246765e-01
+ <_>
+
+ 0 -1 1087 -4.4376000761985779e-02
+
+ 8.2093602418899536e-01 -1.8193599581718445e-01
+ <_>
+
+ 0 -1 1088 -2.3102000355720520e-02
+
+ -4.3044099211692810e-01 1.2375400215387344e-01
+ <_>
+
+ 0 -1 1089 1.9400000572204590e-02
+
+ -2.9726000502705574e-02 -1.1597590446472168e+00
+ <_>
+
+ 0 -1 1090 1.0385700315237045e-01
+
+ 1.1149899661540985e-01 -4.6835222244262695e+00
+ <_>
+
+ 0 -1 1091 -1.8964000046253204e-02
+
+ 2.1773819923400879e+00 -1.4544400572776794e-01
+ <_>
+
+ 0 -1 1092 3.8750998675823212e-02
+
+ -4.9446001648902893e-02 3.4018298983573914e-01
+ <_>
+
+ 0 -1 1093 2.2766999900341034e-02
+
+ -3.2802999019622803e-01 3.0531400442123413e-01
+ <_>
+
+ 0 -1 1094 -3.1357001513242722e-02
+
+ 1.1520819664001465e+00 2.7305999770760536e-02
+ <_>
+
+ 0 -1 1095 9.6909999847412109e-03
+
+ -3.8799500465393066e-01 2.1512599289417267e-01
+ <_>
+
+ 0 -1 1096 -4.9284998327493668e-02
+
+ -1.6774909496307373e+00 1.5774199366569519e-01
+ <_>
+
+ 0 -1 1097 -3.9510998874902725e-02
+
+ -9.7647899389266968e-01 -1.0552000254392624e-02
+ <_>
+
+ 0 -1 1098 4.7997999936342239e-02
+
+ 2.0843900740146637e-01 -6.8992799520492554e-01
+ <_>
+
+ 0 -1 1099 5.1422998309135437e-02
+
+ -1.6665300726890564e-01 1.2149239778518677e+00
+ <_>
+
+ 0 -1 1100 1.4279999770224094e-02
+
+ 2.3627699911594391e-01 -4.1396799683570862e-01
+ <_>
+
+ 0 -1 1101 -9.1611996293067932e-02
+
+ -9.2830902338027954e-01 -1.8345000222325325e-02
+ <_>
+
+ 0 -1 1102 6.5080001950263977e-03
+
+ -7.3647201061248779e-01 1.9497099518775940e-01
+ <_>
+
+ 0 -1 1103 3.5723000764846802e-02
+
+ 1.4197799563407898e-01 -4.2089301347732544e-01
+ <_>
+
+ 0 -1 1104 5.0638001412153244e-02
+
+ 1.1644000187516212e-02 7.8486597537994385e-01
+ <_>
+
+ 0 -1 1105 -1.4613999985158443e-02
+
+ -1.1909500360488892e+00 -3.5128001123666763e-02
+ <_>
+
+ 0 -1 1106 -3.8662999868392944e-02
+
+ 2.4314730167388916e+00 6.5647996962070465e-02
+ <_>
+
+ 0 -1 1107 -4.0346998721361160e-02
+
+ 7.1755301952362061e-01 -1.9108299911022186e-01
+ <_>
+
+ 0 -1 1108 2.3902000859379768e-02
+
+ 1.5646199882030487e-01 -7.9294800758361816e-01
+ <_>
+ 137
+ -3.5125269889831543e+00
+
+ <_>
+
+ 0 -1 1109 8.5640000179409981e-03
+
+ -8.1450700759887695e-01 5.8875298500061035e-01
+ <_>
+
+ 0 -1 1110 -1.3292600214481354e-01
+
+ 9.3213397264480591e-01 -2.9367300868034363e-01
+ <_>
+
+ 0 -1 1111 9.8400004208087921e-03
+
+ -5.6462901830673218e-01 4.1647699475288391e-01
+ <_>
+
+ 0 -1 1112 5.0889998674392700e-03
+
+ -7.9232800006866455e-01 1.6975000500679016e-01
+ <_>
+
+ 0 -1 1113 -6.1039000749588013e-02
+
+ -1.4169000387191772e+00 2.5020999833941460e-02
+ <_>
+
+ 0 -1 1114 -4.6599999768659472e-04
+
+ 3.7982499599456787e-01 -4.1567099094390869e-01
+ <_>
+
+ 0 -1 1115 3.3889999613165855e-03
+
+ -4.0768599510192871e-01 3.5548499226570129e-01
+ <_>
+
+ 0 -1 1116 2.1006999537348747e-02
+
+ -2.4080100655555725e-01 8.6112701892852783e-01
+ <_>
+
+ 0 -1 1117 7.5559997931122780e-03
+
+ -8.7467199563980103e-01 9.8572000861167908e-02
+ <_>
+
+ 0 -1 1118 2.4779999628663063e-02
+
+ 1.5566200017929077e-01 -6.9229799509048462e-01
+ <_>
+
+ 0 -1 1119 -3.5620000213384628e-02
+
+ -1.1472270488739014e+00 3.6359999328851700e-02
+ <_>
+
+ 0 -1 1120 1.9810000434517860e-02
+
+ 1.5516200661659241e-01 -6.9520097970962524e-01
+ <_>
+
+ 0 -1 1121 1.5019999817013741e-02
+
+ 4.1990000754594803e-02 -9.6622800827026367e-01
+ <_>
+
+ 0 -1 1122 -2.3137999698519707e-02
+
+ 4.3396899104118347e-01 2.4160000029951334e-03
+ <_>
+
+ 0 -1 1123 -1.8743000924587250e-02
+
+ 4.3481099605560303e-01 -3.2522499561309814e-01
+ <_>
+
+ 0 -1 1124 4.5080000162124634e-01
+
+ -9.4573996961116791e-02 7.2421300411224365e-01
+ <_>
+
+ 0 -1 1125 1.1854999698698521e-02
+
+ -3.8133099675178528e-01 3.0098399519920349e-01
+ <_>
+
+ 0 -1 1126 -2.4830000475049019e-02
+
+ 8.9300602674484253e-01 -1.0295899957418442e-01
+ <_>
+
+ 0 -1 1127 -4.4743001461029053e-02
+
+ 8.6280298233032227e-01 -2.1716499328613281e-01
+ <_>
+
+ 0 -1 1128 -1.4600000344216824e-02
+
+ 6.0069400072097778e-01 -1.5906299650669098e-01
+ <_>
+
+ 0 -1 1129 -2.4527000263333321e-02
+
+ -1.5872869491577148e+00 -2.1817000582814217e-02
+ <_>
+
+ 0 -1 1130 2.3024000227451324e-02
+
+ 1.6853399574756622e-01 -3.8106900453567505e-01
+ <_>
+
+ 0 -1 1131 -2.4917000904679298e-02
+
+ 5.0810897350311279e-01 -2.7279898524284363e-01
+ <_>
+
+ 0 -1 1132 1.0130000300705433e-03
+
+ -4.3138799071311951e-01 2.6438099145889282e-01
+ <_>
+
+ 0 -1 1133 1.5603000298142433e-02
+
+ -3.1624200940132141e-01 5.5715900659561157e-01
+ <_>
+
+ 0 -1 1134 -2.6685999706387520e-02
+
+ 1.0553920269012451e+00 2.9074000194668770e-02
+ <_>
+
+ 0 -1 1135 1.3940000208094716e-03
+
+ -7.1873801946640015e-01 6.5390996634960175e-02
+ <_>
+
+ 0 -1 1136 -6.4799998654052615e-04
+
+ 2.4884399771690369e-01 -2.0978200435638428e-01
+ <_>
+
+ 0 -1 1137 -3.1888000667095184e-02
+
+ -6.8844497203826904e-01 6.3589997589588165e-02
+ <_>
+
+ 0 -1 1138 -4.9290000461041927e-03
+
+ -5.9152501821517944e-01 2.7943599224090576e-01
+ <_>
+
+ 0 -1 1139 3.1168000772595406e-02
+
+ 4.5223999768495560e-02 -8.8639199733734131e-01
+ <_>
+
+ 0 -1 1140 -3.3663000911474228e-02
+
+ -6.1590200662612915e-01 1.5749299526214600e-01
+ <_>
+
+ 0 -1 1141 1.1966999620199203e-02
+
+ -3.0606698989868164e-01 4.2293301224708557e-01
+ <_>
+
+ 0 -1 1142 -3.4680001437664032e-02
+
+ -1.3734940290451050e+00 1.5908700227737427e-01
+ <_>
+
+ 0 -1 1143 9.9290004000067711e-03
+
+ -5.5860197544097900e-01 1.2119200080633163e-01
+ <_>
+
+ 0 -1 1144 5.9574998915195465e-02
+
+ 4.9720001406967640e-03 8.2055401802062988e-01
+ <_>
+
+ 0 -1 1145 -6.5428003668785095e-02
+
+ 1.5651429891586304e+00 -1.6817499697208405e-01
+ <_>
+
+ 0 -1 1146 -9.2895999550819397e-02
+
+ -1.5794529914855957e+00 1.4661799371242523e-01
+ <_>
+
+ 0 -1 1147 -4.1184000670909882e-02
+
+ -1.5518720149993896e+00 -2.9969999566674232e-02
+ <_>
+
+ 0 -1 1148 2.1447999402880669e-02
+
+ 1.7196300625801086e-01 -6.9343197345733643e-01
+ <_>
+
+ 0 -1 1149 -2.5569999590516090e-02
+
+ -1.3061310052871704e+00 -2.4336999282240868e-02
+ <_>
+
+ 0 -1 1150 -4.1200999170541763e-02
+
+ -1.3821059465408325e+00 1.4801800251007080e-01
+ <_>
+
+ 0 -1 1151 -1.7668999731540680e-02
+
+ -7.0889997482299805e-01 3.6524001508951187e-02
+ <_>
+
+ 0 -1 1152 9.0060001239180565e-03
+
+ -4.0913999080657959e-02 8.0373102426528931e-01
+ <_>
+
+ 0 -1 1153 -1.1652999557554722e-02
+
+ 5.7546800374984741e-01 -2.4991700053215027e-01
+ <_>
+
+ 0 -1 1154 -7.4780001305043697e-03
+
+ -4.9280899763107300e-01 1.9810900092124939e-01
+ <_>
+
+ 0 -1 1155 8.5499999113380909e-04
+
+ -4.8858100175857544e-01 1.3563099503517151e-01
+ <_>
+
+ 0 -1 1156 -3.0538000166416168e-02
+
+ -6.0278397798538208e-01 1.8522000312805176e-01
+ <_>
+
+ 0 -1 1157 -1.8846999853849411e-02
+
+ 2.3565599322319031e-01 -3.5136300325393677e-01
+ <_>
+
+ 0 -1 1158 -8.1129996106028557e-03
+
+ -8.1304997205734253e-02 2.1069599688053131e-01
+ <_>
+
+ 0 -1 1159 -3.4830000251531601e-02
+
+ -1.2065670490264893e+00 -1.4251999557018280e-02
+ <_>
+
+ 0 -1 1160 1.9021000713109970e-02
+
+ 2.3349900543689728e-01 -4.5664900541305542e-01
+ <_>
+
+ 0 -1 1161 -1.9004000350832939e-02
+
+ -8.1075799465179443e-01 1.3140000402927399e-02
+ <_>
+
+ 0 -1 1162 -8.9057996869087219e-02
+
+ 6.1542397737503052e-01 3.2983001321554184e-02
+ <_>
+
+ 0 -1 1163 6.8620000965893269e-03
+
+ -2.9583099484443665e-01 2.7003699541091919e-01
+ <_>
+
+ 0 -1 1164 -2.8240999206900597e-02
+
+ -6.1102700233459473e-01 1.7357499897480011e-01
+ <_>
+
+ 0 -1 1165 -3.2099999953061342e-04
+
+ -5.3322899341583252e-01 6.8539001047611237e-02
+ <_>
+
+ 0 -1 1166 -1.0829100012779236e-01
+
+ -1.2879559993743896e+00 1.1801700294017792e-01
+ <_>
+
+ 0 -1 1167 1.5878999605774879e-02
+
+ -1.7072600126266479e-01 1.1103910207748413e+00
+ <_>
+
+ 0 -1 1168 8.6859995499253273e-03
+
+ -1.0995099693536758e-01 4.6010500192642212e-01
+ <_>
+
+ 0 -1 1169 -2.5234999135136604e-02
+
+ 1.0220669507980347e+00 -1.8694299459457397e-01
+ <_>
+
+ 0 -1 1170 -1.3508999720215797e-02
+
+ -7.8316599130630493e-01 1.4202600717544556e-01
+ <_>
+
+ 0 -1 1171 -7.7149998396635056e-03
+
+ -8.8060700893402100e-01 1.1060000397264957e-02
+ <_>
+
+ 0 -1 1172 7.1580000221729279e-02
+
+ 1.1369399726390839e-01 -1.1032789945602417e+00
+ <_>
+
+ 0 -1 1173 -1.3554000295698643e-02
+
+ -8.1096500158309937e-01 3.4080001059919596e-03
+ <_>
+
+ 0 -1 1174 2.9450000729411840e-03
+
+ -7.2879999876022339e-02 3.4998100996017456e-01
+ <_>
+
+ 0 -1 1175 -5.0833001732826233e-02
+
+ -1.2868590354919434e+00 -2.8842000290751457e-02
+ <_>
+
+ 0 -1 1176 -8.7989997118711472e-03
+
+ 4.7613599896430969e-01 -1.4690400660037994e-01
+ <_>
+
+ 0 -1 1177 2.1424399316310883e-01
+
+ -5.9702001512050629e-02 -2.4802260398864746e+00
+ <_>
+
+ 0 -1 1178 1.3962999917566776e-02
+
+ 1.7420299351215363e-01 -4.3911001086235046e-01
+ <_>
+
+ 0 -1 1179 4.2502000927925110e-02
+
+ -1.9965299963951111e-01 7.0654797554016113e-01
+ <_>
+
+ 0 -1 1180 1.9827999174594879e-02
+
+ -6.9136001169681549e-02 6.1643397808074951e-01
+ <_>
+
+ 0 -1 1181 -3.3560000360012054e-02
+
+ -1.2740780115127563e+00 -2.5673000141978264e-02
+ <_>
+
+ 0 -1 1182 6.3542999327182770e-02
+
+ 1.2403500080108643e-01 -1.0776289701461792e+00
+ <_>
+
+ 0 -1 1183 2.1933000534772873e-02
+
+ 1.4952000230550766e-02 -7.1023499965667725e-01
+ <_>
+
+ 0 -1 1184 -7.8424997627735138e-02
+
+ 6.2033998966217041e-01 3.3610999584197998e-02
+ <_>
+
+ 0 -1 1185 1.4390000142157078e-02
+
+ -3.6324599385261536e-01 1.7308300733566284e-01
+ <_>
+
+ 0 -1 1186 -6.7309997975826263e-02
+
+ 5.2374100685119629e-01 1.2799999676644802e-02
+ <_>
+
+ 0 -1 1187 1.3047499954700470e-01
+
+ -1.7122499644756317e-01 1.1235200166702271e+00
+ <_>
+
+ 0 -1 1188 -4.6245999634265900e-02
+
+ -1.1908329725265503e+00 1.7425599694252014e-01
+ <_>
+
+ 0 -1 1189 -2.9842000454664230e-02
+
+ 8.3930599689483643e-01 -1.8064199388027191e-01
+ <_>
+
+ 0 -1 1190 -3.8099999073892832e-04
+
+ 3.5532799363136292e-01 -2.3842300474643707e-01
+ <_>
+
+ 0 -1 1191 -2.2378999739885330e-02
+
+ -8.7943899631500244e-01 -7.8399997437372804e-04
+ <_>
+
+ 0 -1 1192 -1.5569999814033508e-03
+
+ -1.4253300428390503e-01 2.5876200199127197e-01
+ <_>
+
+ 0 -1 1193 1.2013000436127186e-02
+
+ -2.9015499353408813e-01 2.6051101088523865e-01
+ <_>
+
+ 0 -1 1194 2.4384999647736549e-02
+
+ -3.1438998878002167e-02 5.8695900440216064e-01
+ <_>
+
+ 0 -1 1195 -4.7180999070405960e-02
+
+ 6.9430100917816162e-01 -2.1816100180149078e-01
+ <_>
+
+ 0 -1 1196 -2.4893999099731445e-02
+
+ -6.4599299430847168e-01 1.5611599385738373e-01
+ <_>
+
+ 0 -1 1197 2.1944999694824219e-02
+
+ -2.7742000296711922e-02 -1.1346880197525024e+00
+ <_>
+
+ 0 -1 1198 1.8809899687767029e-01
+
+ -1.0076000355184078e-02 1.2429029941558838e+00
+ <_>
+
+ 0 -1 1199 -7.7872000634670258e-02
+
+ 8.5008001327514648e-01 -1.9015499949455261e-01
+ <_>
+
+ 0 -1 1200 -4.8769000917673111e-02
+
+ -2.0763080120086670e+00 1.2179400026798248e-01
+ <_>
+
+ 0 -1 1201 -1.7115000635385513e-02
+
+ -8.5687297582626343e-01 7.8760003671050072e-03
+ <_>
+
+ 0 -1 1202 -2.7499999850988388e-03
+
+ 3.8645499944686890e-01 -1.1391499638557434e-01
+ <_>
+
+ 0 -1 1203 -9.8793998360633850e-02
+
+ -1.7233899831771851e+00 -5.6063000112771988e-02
+ <_>
+
+ 0 -1 1204 -2.1936999633908272e-02
+
+ 5.4749399423599243e-01 -4.2481999844312668e-02
+ <_>
+
+ 0 -1 1205 6.1096999794244766e-02
+
+ -3.8945000618696213e-02 -1.0807880163192749e+00
+ <_>
+
+ 0 -1 1206 -2.4563999846577644e-02
+
+ 5.8311098814010620e-01 -9.7599998116493225e-04
+ <_>
+
+ 0 -1 1207 3.3752001821994781e-02
+
+ -1.3795999810099602e-02 -8.4730297327041626e-01
+ <_>
+
+ 0 -1 1208 3.8199000060558319e-02
+
+ 1.5114299952983856e-01 -7.9473400115966797e-01
+ <_>
+
+ 0 -1 1209 -2.0117999985814095e-02
+
+ 5.1579099893569946e-01 -2.1445399522781372e-01
+ <_>
+
+ 0 -1 1210 2.4734999984502792e-02
+
+ -2.2105000913143158e-02 4.2917698621749878e-01
+ <_>
+
+ 0 -1 1211 -2.4357000365853310e-02
+
+ -8.6201298236846924e-01 -3.6760000512003899e-03
+ <_>
+
+ 0 -1 1212 -2.6442000642418861e-02
+
+ -4.5397499203681946e-01 2.2462800145149231e-01
+ <_>
+
+ 0 -1 1213 -3.4429999068379402e-03
+
+ 1.3073000311851501e-01 -3.8622701168060303e-01
+ <_>
+
+ 0 -1 1214 1.0701700299978256e-01
+
+ 1.3158600032329559e-01 -7.9306900501251221e-01
+ <_>
+
+ 0 -1 1215 4.5152999460697174e-02
+
+ -2.5296801328659058e-01 4.0672400593757629e-01
+ <_>
+
+ 0 -1 1216 4.4349998235702515e-02
+
+ 2.2613000124692917e-02 7.9618102312088013e-01
+ <_>
+
+ 0 -1 1217 1.0839999886229634e-03
+
+ -3.9158400893211365e-01 1.1639100313186646e-01
+ <_>
+
+ 0 -1 1218 7.1433000266551971e-02
+
+ 8.2466997206211090e-02 1.2530590295791626e+00
+ <_>
+
+ 0 -1 1219 3.5838000476360321e-02
+
+ -1.8203300237655640e-01 7.7078700065612793e-01
+ <_>
+
+ 0 -1 1220 -2.0839000120759010e-02
+
+ -6.1744397878646851e-01 1.5891399979591370e-01
+ <_>
+
+ 0 -1 1221 4.2525801062583923e-01
+
+ -4.8978000879287720e-02 -1.8422030210494995e+00
+ <_>
+
+ 0 -1 1222 1.1408000253140926e-02
+
+ 1.7918199300765991e-01 -1.5383499860763550e-01
+ <_>
+
+ 0 -1 1223 -1.5364999882876873e-02
+
+ -8.4016501903533936e-01 -1.0280000278726220e-03
+ <_>
+
+ 0 -1 1224 -1.5212000347673893e-02
+
+ -1.8995699286460876e-01 1.7130999267101288e-01
+ <_>
+
+ 0 -1 1225 -1.8972000107169151e-02
+
+ -7.9541999101638794e-01 6.6800001077353954e-03
+ <_>
+
+ 0 -1 1226 -3.3330000005662441e-03
+
+ -2.3530800640583038e-01 2.4730099737644196e-01
+ <_>
+
+ 0 -1 1227 9.3248002231121063e-02
+
+ -5.4758001118898392e-02 -1.8324300050735474e+00
+ <_>
+
+ 0 -1 1228 -1.2555000372231007e-02
+
+ 2.6385200023651123e-01 -3.8526400923728943e-01
+ <_>
+
+ 0 -1 1229 -2.7070000767707825e-02
+
+ -6.6929799318313599e-01 2.0340999588370323e-02
+ <_>
+
+ 0 -1 1230 -2.3677000775933266e-02
+
+ 6.7265301942825317e-01 -1.4344000257551670e-02
+ <_>
+
+ 0 -1 1231 -1.4275000430643559e-02
+
+ 3.0186399817466736e-01 -2.8514400124549866e-01
+ <_>
+
+ 0 -1 1232 2.8096999973058701e-02
+
+ 1.4766000211238861e-01 -1.4078520536422729e+00
+ <_>
+
+ 0 -1 1233 5.0840001553297043e-02
+
+ -1.8613600730895996e-01 7.9953002929687500e-01
+ <_>
+
+ 0 -1 1234 1.1505999602377415e-02
+
+ 1.9118399918079376e-01 -8.5035003721714020e-02
+ <_>
+
+ 0 -1 1235 -1.4661000110208988e-02
+
+ 4.5239299535751343e-01 -2.2205199301242828e-01
+ <_>
+
+ 0 -1 1236 2.2842499613761902e-01
+
+ 1.3488399982452393e-01 -1.2894610166549683e+00
+ <_>
+
+ 0 -1 1237 1.1106900125741959e-01
+
+ -2.0753799378871918e-01 5.4561597108840942e-01
+ <_>
+
+ 0 -1 1238 3.2450000289827585e-03
+
+ 3.2053700089454651e-01 -1.6403500735759735e-01
+ <_>
+
+ 0 -1 1239 8.5309997200965881e-02
+
+ -2.0210500061511993e-01 5.3296798467636108e-01
+ <_>
+
+ 0 -1 1240 2.2048000246286392e-02
+
+ 1.5698599815368652e-01 -1.7014099657535553e-01
+ <_>
+
+ 0 -1 1241 -1.5676999464631081e-02
+
+ -6.2863498926162720e-01 4.0761999785900116e-02
+ <_>
+
+ 0 -1 1242 3.3112901449203491e-01
+
+ 1.6609300673007965e-01 -1.0326379537582397e+00
+ <_>
+
+ 0 -1 1243 8.8470000773668289e-03
+
+ -2.5076198577880859e-01 3.1660598516464233e-01
+ <_>
+
+ 0 -1 1244 4.6080000698566437e-02
+
+ 1.5352100133895874e-01 -1.6333500146865845e+00
+ <_>
+
+ 0 -1 1245 -3.7703000009059906e-02
+
+ 5.6873798370361328e-01 -2.0102599263191223e-01
+ <_>
+ 159
+ -3.5939640998840332e+00
+
+ <_>
+
+ 0 -1 1246 -8.1808999180793762e-02
+
+ 5.7124799489974976e-01 -6.7438799142837524e-01
+ <_>
+
+ 0 -1 1247 2.1761199831962585e-01
+
+ -3.8610199093818665e-01 9.0343999862670898e-01
+ <_>
+
+ 0 -1 1248 1.4878000132739544e-02
+
+ 2.2241599857807159e-01 -1.2779350280761719e+00
+ <_>
+
+ 0 -1 1249 5.2434999495744705e-02
+
+ -2.8690400719642639e-01 7.5742298364639282e-01
+ <_>
+
+ 0 -1 1250 9.1429995372891426e-03
+
+ -6.4880400896072388e-01 2.2268800437450409e-01
+ <_>
+
+ 0 -1 1251 7.9169999808073044e-03
+
+ -2.9253599047660828e-01 3.1030198931694031e-01
+ <_>
+
+ 0 -1 1252 -2.6084000244736671e-02
+
+ 4.5532700419425964e-01 -3.8500601053237915e-01
+ <_>
+
+ 0 -1 1253 -2.9400000348687172e-03
+
+ -5.1264399290084839e-01 2.7432298660278320e-01
+ <_>
+
+ 0 -1 1254 5.7130001485347748e-02
+
+ 1.5788000077009201e-02 -1.2133100032806396e+00
+ <_>
+
+ 0 -1 1255 -6.1309998854994774e-03
+
+ 3.9174601435661316e-01 -3.0866798758506775e-01
+ <_>
+
+ 0 -1 1256 -4.0405001491308212e-02
+
+ 1.1901949644088745e+00 -2.0347100496292114e-01
+ <_>
+
+ 0 -1 1257 -2.0297000184655190e-02
+
+ -6.8239498138427734e-01 2.0458699762821198e-01
+ <_>
+
+ 0 -1 1258 -1.7188999801874161e-02
+
+ -8.4939897060394287e-01 3.8433000445365906e-02
+ <_>
+
+ 0 -1 1259 -2.4215999990701675e-02
+
+ -1.1039420366287231e+00 1.5975099802017212e-01
+ <_>
+
+ 0 -1 1260 5.6869000196456909e-02
+
+ -1.9595299661159515e-01 1.1806850433349609e+00
+ <_>
+
+ 0 -1 1261 3.6199999158270657e-04
+
+ -4.0847799181938171e-01 3.2938599586486816e-01
+ <_>
+
+ 0 -1 1262 9.9790003150701523e-03
+
+ -2.9673001170158386e-01 4.1547900438308716e-01
+ <_>
+
+ 0 -1 1263 -5.2625000476837158e-02
+
+ -1.3069299459457397e+00 1.7862600088119507e-01
+ <_>
+
+ 0 -1 1264 -1.3748999685049057e-02
+
+ 2.3665800690650940e-01 -4.4536599516868591e-01
+ <_>
+
+ 0 -1 1265 -3.0517000705003738e-02
+
+ 2.9018300771713257e-01 -1.1210100352764130e-01
+ <_>
+
+ 0 -1 1266 -3.0037501454353333e-01
+
+ -2.4237680435180664e+00 -4.2830999940633774e-02
+ <_>
+
+ 0 -1 1267 -3.5990998148918152e-02
+
+ 8.8206499814987183e-01 -4.7012999653816223e-02
+ <_>
+
+ 0 -1 1268 -5.5112000554800034e-02
+
+ 8.0119001865386963e-01 -2.0490999519824982e-01
+ <_>
+
+ 0 -1 1269 3.3762000501155853e-02
+
+ 1.4617599546909332e-01 -1.1349489688873291e+00
+ <_>
+
+ 0 -1 1270 -8.2710003480315208e-03
+
+ -8.1604897975921631e-01 1.8988000229001045e-02
+ <_>
+
+ 0 -1 1271 -5.4399999789893627e-03
+
+ -7.0980900526046753e-01 2.2343699634075165e-01
+ <_>
+
+ 0 -1 1272 3.1059999018907547e-03
+
+ -7.2808599472045898e-01 4.0224999189376831e-02
+ <_>
+
+ 0 -1 1273 5.3651999682188034e-02
+
+ 1.7170900106430054e-01 -1.1163710355758667e+00
+ <_>
+
+ 0 -1 1274 -1.2541399896144867e-01
+
+ 2.7680370807647705e+00 -1.4611500501632690e-01
+ <_>
+
+ 0 -1 1275 9.2542000114917755e-02
+
+ 1.1609800159931183e-01 -3.9635529518127441e+00
+ <_>
+
+ 0 -1 1276 3.8513999432325363e-02
+
+ -7.6399999670684338e-03 -9.8780900239944458e-01
+ <_>
+
+ 0 -1 1277 -2.0200000144541264e-03
+
+ 2.3059999942779541e-01 -7.4970299005508423e-01
+ <_>
+
+ 0 -1 1278 9.7599998116493225e-03
+
+ -3.1137999892234802e-01 3.0287799239158630e-01
+ <_>
+
+ 0 -1 1279 2.4095000699162483e-02
+
+ -4.9529999494552612e-02 5.2690100669860840e-01
+ <_>
+
+ 0 -1 1280 -1.7982000485062599e-02
+
+ -1.1610640287399292e+00 -5.7000000961124897e-03
+ <_>
+
+ 0 -1 1281 -1.0555000044405460e-02
+
+ -2.7189099788665771e-01 2.3597699403762817e-01
+ <_>
+
+ 0 -1 1282 -7.2889998555183411e-03
+
+ -5.4219102859497070e-01 8.1914000213146210e-02
+ <_>
+
+ 0 -1 1283 2.3939000442624092e-02
+
+ 1.7975799739360809e-01 -6.7049497365951538e-01
+ <_>
+
+ 0 -1 1284 -1.8365999683737755e-02
+
+ 6.2664300203323364e-01 -2.0970100164413452e-01
+ <_>
+
+ 0 -1 1285 1.5715999528765678e-02
+
+ 2.4193699657917023e-01 -1.0444309711456299e+00
+ <_>
+
+ 0 -1 1286 -4.8804000020027161e-02
+
+ -9.4060599803924561e-01 -3.7519999314099550e-03
+ <_>
+
+ 0 -1 1287 6.7130001261830330e-03
+
+ -7.5432002544403076e-02 6.1575299501419067e-01
+ <_>
+
+ 0 -1 1288 9.7770001739263535e-03
+
+ 3.9285000413656235e-02 -8.4810298681259155e-01
+ <_>
+
+ 0 -1 1289 1.4744999818503857e-02
+
+ 1.6968999803066254e-01 -5.0906401872634888e-01
+ <_>
+
+ 0 -1 1290 9.7079001367092133e-02
+
+ -3.3103000372648239e-02 -1.2706379890441895e+00
+ <_>
+
+ 0 -1 1291 4.8285998404026031e-02
+
+ 9.4329997897148132e-02 2.7203190326690674e+00
+ <_>
+
+ 0 -1 1292 9.7810002043843269e-03
+
+ -3.9533400535583496e-01 1.5363800525665283e-01
+ <_>
+
+ 0 -1 1293 -3.9893999695777893e-02
+
+ -2.2767400741577148e-01 1.3913999497890472e-01
+ <_>
+
+ 0 -1 1294 2.2848000749945641e-02
+
+ -2.7391999959945679e-01 3.4199500083923340e-01
+ <_>
+
+ 0 -1 1295 6.7179999314248562e-03
+
+ -1.0874299705028534e-01 4.8125401139259338e-01
+ <_>
+
+ 0 -1 1296 5.9599999338388443e-02
+
+ -4.9522001296281815e-02 -2.0117089748382568e+00
+ <_>
+
+ 0 -1 1297 6.9340001791715622e-03
+
+ 1.5037499368190765e-01 -1.1271899938583374e-01
+ <_>
+
+ 0 -1 1298 1.5757000073790550e-02
+
+ -2.0885000005364418e-02 -1.1651979684829712e+00
+ <_>
+
+ 0 -1 1299 -4.9690000712871552e-02
+
+ -8.0213499069213867e-01 1.4372299611568451e-01
+ <_>
+
+ 0 -1 1300 5.2347000688314438e-02
+
+ -2.0836700499057770e-01 6.1677598953247070e-01
+ <_>
+
+ 0 -1 1301 2.2430999204516411e-02
+
+ 2.0305900275707245e-01 -7.5326198339462280e-01
+ <_>
+
+ 0 -1 1302 4.1142001748085022e-02
+
+ -1.8118199706077576e-01 1.0033359527587891e+00
+ <_>
+
+ 0 -1 1303 -2.1632000803947449e-02
+
+ 4.9998998641967773e-01 -3.4662999212741852e-02
+ <_>
+
+ 0 -1 1304 -8.2808002829551697e-02
+
+ 1.1711900234222412e+00 -1.8433600664138794e-01
+ <_>
+
+ 0 -1 1305 8.5060000419616699e-03
+
+ -6.3225001096725464e-02 2.9024899005889893e-01
+ <_>
+
+ 0 -1 1306 7.8905001282691956e-02
+
+ -2.3274500668048859e-01 5.9695798158645630e-01
+ <_>
+
+ 0 -1 1307 -9.0207003057003021e-02
+
+ -8.2211899757385254e-01 1.7772200703620911e-01
+ <_>
+
+ 0 -1 1308 -2.9269000515341759e-02
+
+ 6.0860699415206909e-01 -2.1468900144100189e-01
+ <_>
+
+ 0 -1 1309 6.9499998353421688e-03
+
+ -4.2665999382734299e-02 6.0512101650238037e-01
+ <_>
+
+ 0 -1 1310 -8.0629996955394745e-03
+
+ -1.1508270502090454e+00 -2.7286000549793243e-02
+ <_>
+
+ 0 -1 1311 1.9595999270677567e-02
+
+ -9.1880001127719879e-03 5.6857800483703613e-01
+ <_>
+
+ 0 -1 1312 -1.4884999953210354e-02
+
+ 3.7658798694610596e-01 -2.7149501442909241e-01
+ <_>
+
+ 0 -1 1313 2.5217000395059586e-02
+
+ -9.9991001188755035e-02 2.4664700031280518e-01
+ <_>
+
+ 0 -1 1314 -1.5855999663472176e-02
+
+ 6.6826701164245605e-01 -2.0614700019359589e-01
+ <_>
+
+ 0 -1 1315 2.9441000893712044e-02
+
+ 1.5832200646400452e-01 -7.6060897111892700e-01
+ <_>
+
+ 0 -1 1316 -8.5279997438192368e-03
+
+ 3.8212299346923828e-01 -2.5407800078392029e-01
+ <_>
+
+ 0 -1 1317 2.4421999230980873e-02
+
+ 1.5105099976062775e-01 -2.8752899169921875e-01
+ <_>
+
+ 0 -1 1318 -3.3886998891830444e-02
+
+ -6.8002802133560181e-01 3.4327000379562378e-02
+ <_>
+
+ 0 -1 1319 -2.0810000132769346e-03
+
+ 2.5413900613784790e-01 -2.6859098672866821e-01
+ <_>
+
+ 0 -1 1320 3.0358999967575073e-02
+
+ -3.0842000618577003e-02 -1.1476809978485107e+00
+ <_>
+
+ 0 -1 1321 4.0210001170635223e-03
+
+ -3.5253798961639404e-01 2.9868099093437195e-01
+ <_>
+
+ 0 -1 1322 2.7681000530719757e-02
+
+ -3.8148999214172363e-02 -1.3262039422988892e+00
+ <_>
+
+ 0 -1 1323 7.9039996489882469e-03
+
+ -2.3737000301480293e-02 7.0503002405166626e-01
+ <_>
+
+ 0 -1 1324 4.4031001627445221e-02
+
+ 1.0674899816513062e-01 -4.5261201262474060e-01
+ <_>
+
+ 0 -1 1325 -3.2370999455451965e-02
+
+ 4.6674901247024536e-01 -6.1546999961137772e-02
+ <_>
+
+ 0 -1 1326 2.0933000370860100e-02
+
+ -2.8447899222373962e-01 4.3845599889755249e-01
+ <_>
+
+ 0 -1 1327 2.5227999314665794e-02
+
+ -2.2537000477313995e-02 7.0389097929000854e-01
+ <_>
+
+ 0 -1 1328 6.5520000644028187e-03
+
+ -3.2554900646209717e-01 2.4023699760437012e-01
+ <_>
+
+ 0 -1 1329 -5.8557998389005661e-02
+
+ -1.2227720022201538e+00 1.1668799817562103e-01
+ <_>
+
+ 0 -1 1330 3.1899999827146530e-02
+
+ -1.9305000081658363e-02 -1.0973169803619385e+00
+ <_>
+
+ 0 -1 1331 -3.0445000156760216e-02
+
+ 6.5582501888275146e-01 7.5090996921062469e-02
+ <_>
+
+ 0 -1 1332 1.4933000318706036e-02
+
+ -5.2155798673629761e-01 1.1523099988698959e-01
+ <_>
+
+ 0 -1 1333 -4.9008000642061234e-02
+
+ -7.8303998708724976e-01 1.6657200455665588e-01
+ <_>
+
+ 0 -1 1334 8.3158999681472778e-02
+
+ -2.6879999786615372e-03 -8.5282301902770996e-01
+ <_>
+
+ 0 -1 1335 2.3902999237179756e-02
+
+ -5.1010999828577042e-02 4.1999098658561707e-01
+ <_>
+
+ 0 -1 1336 1.6428999602794647e-02
+
+ 1.9232999533414841e-02 -6.5049099922180176e-01
+ <_>
+
+ 0 -1 1337 -1.1838000267744064e-02
+
+ -6.2409800291061401e-01 1.5411199629306793e-01
+ <_>
+
+ 0 -1 1338 -1.6799999866634607e-04
+
+ 1.7589199542999268e-01 -3.4338700771331787e-01
+ <_>
+
+ 0 -1 1339 1.9193999469280243e-02
+
+ 4.3418999761343002e-02 7.9069197177886963e-01
+ <_>
+
+ 0 -1 1340 -1.0032000020146370e-02
+
+ 4.5648899674415588e-01 -2.2494800388813019e-01
+ <_>
+
+ 0 -1 1341 -1.4004000462591648e-02
+
+ 3.3570998907089233e-01 -4.8799999058246613e-03
+ <_>
+
+ 0 -1 1342 -1.0319899767637253e-01
+
+ -2.3378000259399414e+00 -5.8933001011610031e-02
+ <_>
+
+ 0 -1 1343 -9.5697000622749329e-02
+
+ -6.6153901815414429e-01 2.0098599791526794e-01
+ <_>
+
+ 0 -1 1344 -4.1480999439954758e-02
+
+ 4.5939201116561890e-01 -2.2314099967479706e-01
+ <_>
+
+ 0 -1 1345 2.4099999573081732e-03
+
+ -2.6898598670959473e-01 2.4922999739646912e-01
+ <_>
+
+ 0 -1 1346 1.0724999755620956e-01
+
+ -1.8640199303627014e-01 7.2769802808761597e-01
+ <_>
+
+ 0 -1 1347 3.1870000530034304e-03
+
+ -2.4608999490737915e-02 2.8643900156021118e-01
+ <_>
+
+ 0 -1 1348 2.9167000204324722e-02
+
+ -3.4683000296354294e-02 -1.1162580251693726e+00
+ <_>
+
+ 0 -1 1349 1.1287000030279160e-02
+
+ 6.3760001212358475e-03 6.6632097959518433e-01
+ <_>
+
+ 0 -1 1350 -1.2001000344753265e-02
+
+ 4.2420101165771484e-01 -2.6279801130294800e-01
+ <_>
+
+ 0 -1 1351 -1.2695999816060066e-02
+
+ -2.1957000717520714e-02 1.8936799466609955e-01
+ <_>
+
+ 0 -1 1352 2.4597000330686569e-02
+
+ -3.4963998943567276e-02 -1.0989320278167725e+00
+ <_>
+
+ 0 -1 1353 4.5953001827001572e-02
+
+ 1.1109799891710281e-01 -2.9306049346923828e+00
+ <_>
+
+ 0 -1 1354 -2.7241000905632973e-02
+
+ 2.9101699590682983e-01 -2.7407899498939514e-01
+ <_>
+
+ 0 -1 1355 4.0063999593257904e-02
+
+ 1.1877900362014771e-01 -6.2801802158355713e-01
+ <_>
+
+ 0 -1 1356 2.3055000230669975e-02
+
+ 1.4813800156116486e-01 -3.7007498741149902e-01
+ <_>
+
+ 0 -1 1357 -2.3737000301480293e-02
+
+ -5.3724801540374756e-01 1.9358199834823608e-01
+ <_>
+
+ 0 -1 1358 7.7522002160549164e-02
+
+ -6.0194000601768494e-02 -1.9489669799804688e+00
+ <_>
+
+ 0 -1 1359 -1.3345000334084034e-02
+
+ -4.5229598879814148e-01 1.8741500377655029e-01
+ <_>
+
+ 0 -1 1360 -2.1719999611377716e-02
+
+ 1.2144249677658081e+00 -1.5365800261497498e-01
+ <_>
+
+ 0 -1 1361 -7.1474999189376831e-02
+
+ -2.3047130107879639e+00 1.0999900102615356e-01
+ <_>
+
+ 0 -1 1362 -5.4999999701976776e-03
+
+ -7.1855199337005615e-01 2.0100999623537064e-02
+ <_>
+
+ 0 -1 1363 2.6740999892354012e-02
+
+ 7.3545001447200775e-02 9.8786002397537231e-01
+ <_>
+
+ 0 -1 1364 -3.9407998323440552e-02
+
+ -1.2227380275726318e+00 -4.3506998568773270e-02
+ <_>
+
+ 0 -1 1365 2.5888999924063683e-02
+
+ 1.3409300148487091e-01 -1.1770780086517334e+00
+ <_>
+
+ 0 -1 1366 4.8925001174211502e-02
+
+ -3.0810000374913216e-02 -9.3479502201080322e-01
+ <_>
+
+ 0 -1 1367 3.6892998963594437e-02
+
+ 1.3333700597286224e-01 -1.4998290538787842e+00
+ <_>
+
+ 0 -1 1368 7.8929997980594635e-02
+
+ -1.4538800716400146e-01 1.5631790161132812e+00
+ <_>
+
+ 0 -1 1369 2.9006000608205795e-02
+
+ 1.9383700191974640e-01 -6.7642802000045776e-01
+ <_>
+
+ 0 -1 1370 6.3089998438954353e-03
+
+ -3.7465399503707886e-01 1.0857500135898590e-01
+ <_>
+
+ 0 -1 1371 -6.5830998122692108e-02
+
+ 8.1059402227401733e-01 3.0201999470591545e-02
+ <_>
+
+ 0 -1 1372 -6.8965002894401550e-02
+
+ 8.3772599697113037e-01 -1.7140999436378479e-01
+ <_>
+
+ 0 -1 1373 -1.1669100075960159e-01
+
+ -9.4647198915481567e-01 1.3123199343681335e-01
+ <_>
+
+ 0 -1 1374 -1.3060000492259860e-03
+
+ 4.6007998287677765e-02 -5.2011597156524658e-01
+ <_>
+
+ 0 -1 1375 -4.4558998197317123e-02
+
+ -1.9423669576644897e+00 1.3200700283050537e-01
+ <_>
+
+ 0 -1 1376 5.1033001393079758e-02
+
+ -2.1480999886989594e-01 4.8673900961875916e-01
+ <_>
+
+ 0 -1 1377 -3.1578000634908676e-02
+
+ 5.9989798069000244e-01 7.9159997403621674e-03
+ <_>
+
+ 0 -1 1378 2.1020000800490379e-02
+
+ -2.2069500386714935e-01 5.4046201705932617e-01
+ <_>
+
+ 0 -1 1379 -1.3824200630187988e-01
+
+ 6.2957501411437988e-01 -2.1712999790906906e-02
+ <_>
+
+ 0 -1 1380 5.2228998392820358e-02
+
+ -2.3360900580883026e-01 4.9760800600051880e-01
+ <_>
+
+ 0 -1 1381 2.5884000584483147e-02
+
+ 1.8041999638080597e-01 -2.2039200365543365e-01
+ <_>
+
+ 0 -1 1382 -1.2138999998569489e-02
+
+ -6.9731897115707397e-01 1.5712000429630280e-02
+ <_>
+
+ 0 -1 1383 -2.4237999692559242e-02
+
+ 3.4593299031257629e-01 7.1469999849796295e-02
+ <_>
+
+ 0 -1 1384 -2.5272000581026077e-02
+
+ -8.7583297491073608e-01 -9.8240002989768982e-03
+ <_>
+
+ 0 -1 1385 1.2597000226378441e-02
+
+ 2.3649999499320984e-01 -2.8731200098991394e-01
+ <_>
+
+ 0 -1 1386 5.7330999523401260e-02
+
+ -6.1530999839305878e-02 -2.2326040267944336e+00
+ <_>
+
+ 0 -1 1387 1.6671000048518181e-02
+
+ -1.9850100576877594e-01 4.0810701251029968e-01
+ <_>
+
+ 0 -1 1388 -2.2818999364972115e-02
+
+ 9.6487599611282349e-01 -2.0245699584484100e-01
+ <_>
+
+ 0 -1 1389 3.7000001611886546e-05
+
+ -5.8908998966217041e-02 2.7055400609970093e-01
+ <_>
+
+ 0 -1 1390 -7.6700001955032349e-03
+
+ -4.5317101478576660e-01 8.9628003537654877e-02
+ <_>
+
+ 0 -1 1391 9.4085998833179474e-02
+
+ 1.1604599654674530e-01 -1.0951169729232788e+00
+ <_>
+
+ 0 -1 1392 -6.2267001718282700e-02
+
+ 1.8096530437469482e+00 -1.4773200452327728e-01
+ <_>
+
+ 0 -1 1393 1.7416000366210938e-02
+
+ 2.3068200051784515e-01 -4.2417600750923157e-01
+ <_>
+
+ 0 -1 1394 -2.2066000849008560e-02
+
+ 4.9270299077033997e-01 -2.0630900561809540e-01
+ <_>
+
+ 0 -1 1395 -1.0404000058770180e-02
+
+ 6.0924297571182251e-01 2.8130000457167625e-02
+ <_>
+
+ 0 -1 1396 -9.3670003116130829e-03
+
+ 4.0171200037002563e-01 -2.1681700646877289e-01
+ <_>
+
+ 0 -1 1397 -2.9039999470114708e-02
+
+ -8.4876501560211182e-01 1.4246800541877747e-01
+ <_>
+
+ 0 -1 1398 -2.1061999723315239e-02
+
+ -7.9198300838470459e-01 -1.2595999985933304e-02
+ <_>
+
+ 0 -1 1399 -3.7000998854637146e-02
+
+ -6.7488902807235718e-01 1.2830400466918945e-01
+ <_>
+
+ 0 -1 1400 1.0735999792814255e-02
+
+ 3.6779999732971191e-02 -6.3393002748489380e-01
+ <_>
+
+ 0 -1 1401 1.6367599368095398e-01
+
+ 1.3803899288177490e-01 -4.7189000248908997e-01
+ <_>
+
+ 0 -1 1402 9.4917997717857361e-02
+
+ -1.3855700194835663e-01 1.9492419958114624e+00
+ <_>
+
+ 0 -1 1403 3.5261999815702438e-02
+
+ 1.3721899688243866e-01 -2.1186530590057373e+00
+ <_>
+
+ 0 -1 1404 1.2811000458896160e-02
+
+ -2.0008100569248199e-01 4.9507799744606018e-01
+ <_>
+ 155
+ -3.3933560848236084e+00
+
+ <_>
+
+ 0 -1 1405 1.3904400169849396e-01
+
+ -4.6581199765205383e-01 7.6431602239608765e-01
+ <_>
+
+ 0 -1 1406 1.1916999705135822e-02
+
+ -9.4398999214172363e-01 3.9726299047470093e-01
+ <_>
+
+ 0 -1 1407 -1.0006999596953392e-02
+
+ 3.2718798518180847e-01 -6.3367402553558350e-01
+ <_>
+
+ 0 -1 1408 -6.0479999519884586e-03
+
+ 2.7427899837493896e-01 -5.7446998357772827e-01
+ <_>
+
+ 0 -1 1409 -1.2489999644458294e-03
+
+ 2.3629300296306610e-01 -6.8593502044677734e-01
+ <_>
+
+ 0 -1 1410 3.2382000237703323e-02
+
+ -5.7630199193954468e-01 2.7492699027061462e-01
+ <_>
+
+ 0 -1 1411 -1.3957999646663666e-02
+
+ -6.1061501502990723e-01 2.4541600048542023e-01
+ <_>
+
+ 0 -1 1412 1.1159999994561076e-03
+
+ -5.6539100408554077e-01 2.7179300785064697e-01
+ <_>
+
+ 0 -1 1413 2.7000000045518391e-05
+
+ -8.0235999822616577e-01 1.1509100347757339e-01
+ <_>
+
+ 0 -1 1414 -2.5700000696815550e-04
+
+ -8.1205898523330688e-01 2.3844699561595917e-01
+ <_>
+
+ 0 -1 1415 4.0460000745952129e-03
+
+ 1.3909600675106049e-01 -6.6163200139999390e-01
+ <_>
+
+ 0 -1 1416 1.4356000348925591e-02
+
+ -1.6485199332237244e-01 4.1901698708534241e-01
+ <_>
+
+ 0 -1 1417 -5.5374998599290848e-02
+
+ 1.4425870180130005e+00 -1.8820199370384216e-01
+ <_>
+
+ 0 -1 1418 9.3594998121261597e-02
+
+ 1.3548299670219421e-01 -9.1636097431182861e-01
+ <_>
+
+ 0 -1 1419 2.6624999940395355e-02
+
+ -3.3748298883438110e-01 3.9233601093292236e-01
+ <_>
+
+ 0 -1 1420 3.7469998933374882e-03
+
+ -1.1615400016307831e-01 4.4399300217628479e-01
+ <_>
+
+ 0 -1 1421 -3.1886000186204910e-02
+
+ -9.9498301744461060e-01 1.6120000509545207e-03
+ <_>
+
+ 0 -1 1422 -2.2600000724196434e-02
+
+ -4.8067399859428406e-01 1.7007300257682800e-01
+ <_>
+
+ 0 -1 1423 2.5202000513672829e-02
+
+ 3.5580001771450043e-02 -8.0215400457382202e-01
+ <_>
+
+ 0 -1 1424 -3.1036999076604843e-02
+
+ -1.0895340442657471e+00 1.8081900477409363e-01
+ <_>
+
+ 0 -1 1425 -2.6475999504327774e-02
+
+ 9.5671200752258301e-01 -2.1049399673938751e-01
+ <_>
+
+ 0 -1 1426 -1.3853999786078930e-02
+
+ -1.0370320081710815e+00 2.2166700661182404e-01
+ <_>
+
+ 0 -1 1427 -6.2925003468990326e-02
+
+ 9.0199398994445801e-01 -1.9085299968719482e-01
+ <_>
+
+ 0 -1 1428 -4.4750999659299850e-02
+
+ -1.0119110345840454e+00 1.4691199362277985e-01
+ <_>
+
+ 0 -1 1429 -2.0428000018000603e-02
+
+ 6.1624497175216675e-01 -2.3552699387073517e-01
+ <_>
+
+ 0 -1 1430 -8.0329999327659607e-03
+
+ -8.3279997110366821e-02 2.1728700399398804e-01
+ <_>
+
+ 0 -1 1431 8.7280003353953362e-03
+
+ 6.5458998084068298e-02 -6.0318702459335327e-01
+ <_>
+
+ 0 -1 1432 -2.7202000841498375e-02
+
+ -9.3447399139404297e-01 1.5270000696182251e-01
+ <_>
+
+ 0 -1 1433 -1.6471000388264656e-02
+
+ -8.4177100658416748e-01 1.3332000002264977e-02
+ <_>
+
+ 0 -1 1434 -1.3744000345468521e-02
+
+ 6.0567200183868408e-01 -9.2021003365516663e-02
+ <_>
+
+ 0 -1 1435 2.9164999723434448e-02
+
+ -2.8114000335335732e-02 -1.4014569520950317e+00
+ <_>
+
+ 0 -1 1436 3.7457000464200974e-02
+
+ 1.3080599904060364e-01 -4.9382498860359192e-01
+ <_>
+
+ 0 -1 1437 -2.5070000439882278e-02
+
+ -1.1289390325546265e+00 -1.4600000344216824e-02
+ <_>
+
+ 0 -1 1438 -6.3812002539634705e-02
+
+ 7.5871598720550537e-01 -1.8200000049546361e-03
+ <_>
+
+ 0 -1 1439 -9.3900002539157867e-03
+
+ 2.9936400055885315e-01 -2.9487800598144531e-01
+ <_>
+
+ 0 -1 1440 -7.6000002445653081e-04
+
+ 1.9725000485777855e-02 1.9993899762630463e-01
+ <_>
+
+ 0 -1 1441 -2.1740999072790146e-02
+
+ -8.5247898101806641e-01 4.9169998615980148e-02
+ <_>
+
+ 0 -1 1442 -1.7869999632239342e-02
+
+ -5.9985999017953873e-02 1.5222500264644623e-01
+ <_>
+
+ 0 -1 1443 -2.4831000715494156e-02
+
+ 3.5603401064872742e-01 -2.6259899139404297e-01
+ <_>
+
+ 0 -1 1444 1.5715500712394714e-01
+
+ 1.5599999460391700e-04 1.0428730249404907e+00
+ <_>
+
+ 0 -1 1445 6.9026999175548553e-02
+
+ -3.3006999641656876e-02 -1.1796669960021973e+00
+ <_>
+
+ 0 -1 1446 -1.1021999642252922e-02
+
+ 5.8987700939178467e-01 -5.7647999376058578e-02
+ <_>
+
+ 0 -1 1447 -1.3834999874234200e-02
+
+ 5.9502798318862915e-01 -2.4418599903583527e-01
+ <_>
+
+ 0 -1 1448 -3.0941000208258629e-02
+
+ -1.1723799705505371e+00 1.6907000541687012e-01
+ <_>
+
+ 0 -1 1449 2.1258000284433365e-02
+
+ -1.8900999799370766e-02 -1.0684759616851807e+00
+ <_>
+
+ 0 -1 1450 9.3079999089241028e-02
+
+ 1.6305600106716156e-01 -1.3375270366668701e+00
+ <_>
+
+ 0 -1 1451 2.9635999351739883e-02
+
+ -2.2524799406528473e-01 4.5400100946426392e-01
+ <_>
+
+ 0 -1 1452 -1.2199999764561653e-04
+
+ 2.7409100532531738e-01 -3.7371399998664856e-01
+ <_>
+
+ 0 -1 1453 -4.2098000645637512e-02
+
+ -7.5828802585601807e-01 1.7137000337243080e-02
+ <_>
+
+ 0 -1 1454 -2.2505000233650208e-02
+
+ -2.2759300470352173e-01 2.3698699474334717e-01
+ <_>
+
+ 0 -1 1455 -1.2862999923527241e-02
+
+ 1.9252400100231171e-01 -3.2127100229263306e-01
+ <_>
+
+ 0 -1 1456 2.7860000729560852e-02
+
+ 1.6723699867725372e-01 -1.0209059715270996e+00
+ <_>
+
+ 0 -1 1457 -2.7807999402284622e-02
+
+ 1.2824759483337402e+00 -1.7225299775600433e-01
+ <_>
+
+ 0 -1 1458 -6.1630001291632652e-03
+
+ -5.4072898626327515e-01 2.3885700106620789e-01
+ <_>
+
+ 0 -1 1459 -2.0436000078916550e-02
+
+ 6.3355398178100586e-01 -2.1090599894523621e-01
+ <_>
+
+ 0 -1 1460 -1.2307999655604362e-02
+
+ -4.9778199195861816e-01 1.7402599751949310e-01
+ <_>
+
+ 0 -1 1461 -4.0493998676538467e-02
+
+ -1.1848740577697754e+00 -3.3890999853610992e-02
+ <_>
+
+ 0 -1 1462 2.9657000675797462e-02
+
+ 2.1740999072790146e-02 1.0069919824600220e+00
+ <_>
+
+ 0 -1 1463 6.8379999138414860e-03
+
+ 2.9217999428510666e-02 -5.9906297922134399e-01
+ <_>
+
+ 0 -1 1464 1.6164999455213547e-02
+
+ -2.1000799536705017e-01 3.7637299299240112e-01
+ <_>
+
+ 0 -1 1465 5.0193000584840775e-02
+
+ 2.5319999549537897e-03 -7.1668201684951782e-01
+ <_>
+
+ 0 -1 1466 1.9680000841617584e-03
+
+ -2.1921400725841522e-01 3.2298699021339417e-01
+ <_>
+
+ 0 -1 1467 2.4979999288916588e-02
+
+ -9.6840001642704010e-03 -7.7572900056838989e-01
+ <_>
+
+ 0 -1 1468 -1.5809999778866768e-02
+
+ 4.4637501239776611e-01 -6.1760000884532928e-02
+ <_>
+
+ 0 -1 1469 3.7206999957561493e-02
+
+ -2.0495399832725525e-01 5.7722198963165283e-01
+ <_>
+
+ 0 -1 1470 -7.9264998435974121e-02
+
+ -7.6745402812957764e-01 1.2550400197505951e-01
+ <_>
+
+ 0 -1 1471 -1.7152000218629837e-02
+
+ -1.4121830463409424e+00 -5.1704000681638718e-02
+ <_>
+
+ 0 -1 1472 3.2740000635385513e-02
+
+ 1.9334000349044800e-01 -6.3633698225021362e-01
+ <_>
+
+ 0 -1 1473 -1.1756999790668488e-01
+
+ 8.4325402975082397e-01 -1.8018600344657898e-01
+ <_>
+
+ 0 -1 1474 1.2057200074195862e-01
+
+ 1.2530000507831573e-01 -2.1213600635528564e+00
+ <_>
+
+ 0 -1 1475 4.2779999785125256e-03
+
+ -4.6604400873184204e-01 8.9643999934196472e-02
+ <_>
+
+ 0 -1 1476 -7.2544999420642853e-02
+
+ 5.1826500892639160e-01 1.6823999583721161e-02
+ <_>
+
+ 0 -1 1477 1.7710599303245544e-01
+
+ -3.0910000205039978e-02 -1.1046639680862427e+00
+ <_>
+
+ 0 -1 1478 8.4229996427893639e-03
+
+ 2.4445800483226776e-01 -3.8613098859786987e-01
+ <_>
+
+ 0 -1 1479 -1.3035000301897526e-02
+
+ 9.8004400730133057e-01 -1.7016500234603882e-01
+ <_>
+
+ 0 -1 1480 1.8912000581622124e-02
+
+ 2.0248499512672424e-01 -3.8545900583267212e-01
+ <_>
+
+ 0 -1 1481 2.1447999402880669e-02
+
+ -2.5717198848724365e-01 3.5181200504302979e-01
+ <_>
+
+ 0 -1 1482 6.3357003033161163e-02
+
+ 1.6994799673557281e-01 -9.1383802890777588e-01
+ <_>
+
+ 0 -1 1483 -3.2435998320579529e-02
+
+ -8.5681599378585815e-01 -2.1680999547243118e-02
+ <_>
+
+ 0 -1 1484 -2.3564999923110008e-02
+
+ 5.6115597486495972e-01 -2.2400000307243317e-04
+ <_>
+
+ 0 -1 1485 1.8789000809192657e-02
+
+ -2.5459799170494080e-01 3.4512901306152344e-01
+ <_>
+
+ 0 -1 1486 3.1042000278830528e-02
+
+ 7.5719999149441719e-03 3.4800198674201965e-01
+ <_>
+
+ 0 -1 1487 -1.1226999573409557e-02
+
+ -6.0219800472259521e-01 4.2814999818801880e-02
+ <_>
+
+ 0 -1 1488 -1.2845999561250210e-02
+
+ 4.2020401358604431e-01 -5.3801000118255615e-02
+ <_>
+
+ 0 -1 1489 -1.2791999615728855e-02
+
+ 2.2724500298500061e-01 -3.2398000359535217e-01
+ <_>
+
+ 0 -1 1490 6.8651996552944183e-02
+
+ 9.3532003462314606e-02 10.
+ <_>
+
+ 0 -1 1491 5.2789999172091484e-03
+
+ -2.6926299929618835e-01 3.3303201198577881e-01
+ <_>
+
+ 0 -1 1492 -3.8779001682996750e-02
+
+ -7.2365301847457886e-01 1.7806500196456909e-01
+ <_>
+
+ 0 -1 1493 6.1820000410079956e-03
+
+ -3.5119399428367615e-01 1.6586300730705261e-01
+ <_>
+
+ 0 -1 1494 1.7515200376510620e-01
+
+ 1.1623100191354752e-01 -1.5419290065765381e+00
+ <_>
+
+ 0 -1 1495 1.1627999693155289e-01
+
+ -9.1479998081922531e-03 -9.9842602014541626e-01
+ <_>
+
+ 0 -1 1496 -2.2964000701904297e-02
+
+ 2.0565399527549744e-01 1.5432000160217285e-02
+ <_>
+
+ 0 -1 1497 -5.1410000771284103e-02
+
+ 5.8072400093078613e-01 -2.0118400454521179e-01
+ <_>
+
+ 0 -1 1498 2.2474199533462524e-01
+
+ 1.8728999421000481e-02 1.0829299688339233e+00
+ <_>
+
+ 0 -1 1499 9.4860000535845757e-03
+
+ -3.3171299099922180e-01 1.9902999699115753e-01
+ <_>
+
+ 0 -1 1500 -1.1846300214529037e-01
+
+ 1.3711010217666626e+00 6.8926997482776642e-02
+ <_>
+
+ 0 -1 1501 3.7810999900102615e-02
+
+ -9.3600002583116293e-04 -8.3996999263763428e-01
+ <_>
+
+ 0 -1 1502 2.2202000021934509e-02
+
+ -1.1963999830186367e-02 3.6673998832702637e-01
+ <_>
+
+ 0 -1 1503 -3.6366000771522522e-02
+
+ 3.7866500020027161e-01 -2.7714800834655762e-01
+ <_>
+
+ 0 -1 1504 -1.3184699416160583e-01
+
+ -2.7481179237365723e+00 1.0666900128126144e-01
+ <_>
+
+ 0 -1 1505 -4.1655998677015305e-02
+
+ 4.7524300217628479e-01 -2.3249800503253937e-01
+ <_>
+
+ 0 -1 1506 -3.3151999115943909e-02
+
+ -5.7929402589797974e-01 1.7434400320053101e-01
+ <_>
+
+ 0 -1 1507 1.5769999474287033e-02
+
+ -1.1284000240266323e-02 -8.3701401948928833e-01
+ <_>
+
+ 0 -1 1508 -3.9363000541925430e-02
+
+ 3.4821599721908569e-01 -1.7455400526523590e-01
+ <_>
+
+ 0 -1 1509 -6.7849002778530121e-02
+
+ 1.4225699901580811e+00 -1.4765599370002747e-01
+ <_>
+
+ 0 -1 1510 -2.6775000616908073e-02
+
+ 2.3947000503540039e-01 1.3271999545395374e-02
+ <_>
+
+ 0 -1 1511 3.9919000118970871e-02
+
+ -8.9999996125698090e-03 -7.5938898324966431e-01
+ <_>
+
+ 0 -1 1512 1.0065600275993347e-01
+
+ -1.8685000017285347e-02 7.6245301961898804e-01
+ <_>
+
+ 0 -1 1513 -8.1022001802921295e-02
+
+ -9.0439099073410034e-01 -8.5880002006888390e-03
+ <_>
+
+ 0 -1 1514 -2.1258000284433365e-02
+
+ -2.1319599449634552e-01 2.1919700503349304e-01
+ <_>
+
+ 0 -1 1515 -1.0630999691784382e-02
+
+ 1.9598099589347839e-01 -3.5768100619316101e-01
+ <_>
+
+ 0 -1 1516 8.1300002057105303e-04
+
+ -9.2794999480247498e-02 2.6145899295806885e-01
+ <_>
+
+ 0 -1 1517 3.4650000743567944e-03
+
+ -5.5336099863052368e-01 2.7386000379920006e-02
+ <_>
+
+ 0 -1 1518 1.8835999071598053e-02
+
+ 1.8446099758148193e-01 -6.6934299468994141e-01
+ <_>
+
+ 0 -1 1519 -2.5631999596953392e-02
+
+ 1.9382879734039307e+00 -1.4708900451660156e-01
+ <_>
+
+ 0 -1 1520 -4.0939999744296074e-03
+
+ -2.6451599597930908e-01 2.0733200013637543e-01
+ <_>
+
+ 0 -1 1521 -8.9199998183175921e-04
+
+ -5.5031597614288330e-01 5.0374999642372131e-02
+ <_>
+
+ 0 -1 1522 -4.9518000334501266e-02
+
+ -2.5615389347076416e+00 1.3141700625419617e-01
+ <_>
+
+ 0 -1 1523 1.1680999770760536e-02
+
+ -2.4819800257682800e-01 3.9982700347900391e-01
+ <_>
+
+ 0 -1 1524 3.4563999623060226e-02
+
+ 1.6178800165653229e-01 -7.1418899297714233e-01
+ <_>
+
+ 0 -1 1525 -8.2909995689988136e-03
+
+ 2.2180099785327911e-01 -2.9181700944900513e-01
+ <_>
+
+ 0 -1 1526 -2.2358000278472900e-02
+
+ 3.1044098734855652e-01 -2.7280000504106283e-03
+ <_>
+
+ 0 -1 1527 -3.0801000073552132e-02
+
+ -9.5672702789306641e-01 -8.3400001749396324e-03
+ <_>
+
+ 0 -1 1528 4.3779000639915466e-02
+
+ 1.2556900084018707e-01 -1.1759619712829590e+00
+ <_>
+
+ 0 -1 1529 4.3046001344919205e-02
+
+ -5.8876998722553253e-02 -1.8568470478057861e+00
+ <_>
+
+ 0 -1 1530 2.7188999578356743e-02
+
+ 4.2858000844717026e-02 3.9036700129508972e-01
+ <_>
+
+ 0 -1 1531 9.4149997457861900e-03
+
+ -4.3567001819610596e-02 -1.1094470024108887e+00
+ <_>
+
+ 0 -1 1532 9.4311997294425964e-02
+
+ 4.0256999433040619e-02 9.8442298173904419e-01
+ <_>
+
+ 0 -1 1533 1.7025099694728851e-01
+
+ 2.9510000720620155e-02 -6.9509297609329224e-01
+ <_>
+
+ 0 -1 1534 -4.7148000448942184e-02
+
+ 1.0338569879531860e+00 6.7602001130580902e-02
+ <_>
+
+ 0 -1 1535 1.1186300218105316e-01
+
+ -6.8682998418807983e-02 -2.4985830783843994e+00
+ <_>
+
+ 0 -1 1536 -1.4353999868035316e-02
+
+ -5.9481900930404663e-01 1.5001699328422546e-01
+ <_>
+
+ 0 -1 1537 3.4024000167846680e-02
+
+ -6.4823001623153687e-02 -2.1382639408111572e+00
+ <_>
+
+ 0 -1 1538 2.1601999178528786e-02
+
+ 5.5309999734163284e-02 7.8292900323867798e-01
+ <_>
+
+ 0 -1 1539 2.1771999076008797e-02
+
+ -7.1279997937381268e-03 -7.2148102521896362e-01
+ <_>
+
+ 0 -1 1540 8.2416996359825134e-02
+
+ 1.4609499275684357e-01 -1.3636670112609863e+00
+ <_>
+
+ 0 -1 1541 8.4671996533870697e-02
+
+ -1.7784699797630310e-01 7.2857701778411865e-01
+ <_>
+
+ 0 -1 1542 -5.5128000676631927e-02
+
+ -5.9402400255203247e-01 1.9357800483703613e-01
+ <_>
+
+ 0 -1 1543 -6.4823001623153687e-02
+
+ -1.0783840417861938e+00 -4.0734000504016876e-02
+ <_>
+
+ 0 -1 1544 -2.2769000381231308e-02
+
+ 7.7900201082229614e-01 3.4960000775754452e-03
+ <_>
+
+ 0 -1 1545 5.4756000638008118e-02
+
+ -6.5683998167514801e-02 -1.8188409805297852e+00
+ <_>
+
+ 0 -1 1546 -8.9000001025851816e-05
+
+ -1.7891999334096909e-02 2.0768299698829651e-01
+ <_>
+
+ 0 -1 1547 9.8361998796463013e-02
+
+ -5.5946998298168182e-02 -1.4153920412063599e+00
+ <_>
+
+ 0 -1 1548 -7.0930002257227898e-03
+
+ 3.4135299921035767e-01 -1.2089899927377701e-01
+ <_>
+
+ 0 -1 1549 5.0278000533580780e-02
+
+ -2.6286700367927551e-01 2.5797298550605774e-01
+ <_>
+
+ 0 -1 1550 -5.7870000600814819e-03
+
+ -1.3178600370883942e-01 1.7350199818611145e-01
+ <_>
+
+ 0 -1 1551 1.3973999768495560e-02
+
+ 2.8518000617623329e-02 -6.1152201890945435e-01
+ <_>
+
+ 0 -1 1552 2.1449999883770943e-02
+
+ 2.6181999593973160e-02 3.0306598544120789e-01
+ <_>
+
+ 0 -1 1553 -2.9214000329375267e-02
+
+ 4.4940599799156189e-01 -2.2803099453449249e-01
+ <_>
+
+ 0 -1 1554 4.8099999548867345e-04
+
+ -1.9879999756813049e-01 2.0744499564170837e-01
+ <_>
+
+ 0 -1 1555 1.7109999898821115e-03
+
+ -5.4037201404571533e-01 6.7865997552871704e-02
+ <_>
+
+ 0 -1 1556 8.6660003289580345e-03
+
+ -1.3128000311553478e-02 5.2297902107238770e-01
+ <_>
+
+ 0 -1 1557 6.3657999038696289e-02
+
+ 6.8299002945423126e-02 -4.9235099554061890e-01
+ <_>
+
+ 0 -1 1558 -2.7968000620603561e-02
+
+ 6.8183898925781250e-01 7.8781001269817352e-02
+ <_>
+
+ 0 -1 1559 4.8953998833894730e-02
+
+ -2.0622399449348450e-01 5.0388097763061523e-01
+ <_>
+ 169
+ -3.2396929264068604e+00
+
+ <_>
+
+ 0 -1 1560 -2.9312999919056892e-02
+
+ 7.1284699440002441e-01 -5.8230698108673096e-01
+ <_>
+
+ 0 -1 1561 1.2415099889039993e-01
+
+ -3.6863499879837036e-01 6.0067200660705566e-01
+ <_>
+
+ 0 -1 1562 7.9349996522068977e-03
+
+ -8.6008298397064209e-01 2.1724699437618256e-01
+ <_>
+
+ 0 -1 1563 3.0365999788045883e-02
+
+ -2.7186998724937439e-01 6.1247897148132324e-01
+ <_>
+
+ 0 -1 1564 2.5218000635504723e-02
+
+ -3.4748300909996033e-01 5.0427699089050293e-01
+ <_>
+
+ 0 -1 1565 1.0014000348746777e-02
+
+ -3.1898999214172363e-01 4.1376799345016479e-01
+ <_>
+
+ 0 -1 1566 -1.6775000840425491e-02
+
+ -6.9048100709915161e-01 9.4830997288227081e-02
+ <_>
+
+ 0 -1 1567 -2.6950000319629908e-03
+
+ -2.0829799771308899e-01 2.3737199604511261e-01
+ <_>
+
+ 0 -1 1568 4.2257998138666153e-02
+
+ -4.9366700649261475e-01 1.8170599639415741e-01
+ <_>
+
+ 0 -1 1569 -4.8505000770092010e-02
+
+ 1.3429640531539917e+00 3.9769001305103302e-02
+ <_>
+
+ 0 -1 1570 2.8992999345064163e-02
+
+ 4.6496000140905380e-02 -8.1643497943878174e-01
+ <_>
+
+ 0 -1 1571 -4.0089000016450882e-02
+
+ -7.1197801828384399e-01 2.2553899884223938e-01
+ <_>
+
+ 0 -1 1572 -4.1021998971700668e-02
+
+ 1.0057929754257202e+00 -1.9690200686454773e-01
+ <_>
+
+ 0 -1 1573 1.1838000267744064e-02
+
+ -1.2600000016391277e-02 8.0767101049423218e-01
+ <_>
+
+ 0 -1 1574 -2.1328000351786613e-02
+
+ -8.2023900747299194e-01 2.0524999126791954e-02
+ <_>
+
+ 0 -1 1575 -2.3904999718070030e-02
+
+ 5.4210501909255981e-01 -7.4767000973224640e-02
+ <_>
+
+ 0 -1 1576 1.8008999526500702e-02
+
+ -3.3827701210975647e-01 4.2358601093292236e-01
+ <_>
+
+ 0 -1 1577 -4.3614000082015991e-02
+
+ -1.1983489990234375e+00 1.5566200017929077e-01
+ <_>
+
+ 0 -1 1578 -9.2449998483061790e-03
+
+ -8.9029997587203979e-01 1.1003999970853329e-02
+ <_>
+
+ 0 -1 1579 4.7485001385211945e-02
+
+ 1.6664099693298340e-01 -9.0764498710632324e-01
+ <_>
+
+ 0 -1 1580 -1.4233999885618687e-02
+
+ 6.2695199251174927e-01 -2.5791200995445251e-01
+ <_>
+
+ 0 -1 1581 3.8010000716894865e-03
+
+ -2.8229999542236328e-01 2.6624599099159241e-01
+ <_>
+
+ 0 -1 1582 3.4330000635236502e-03
+
+ -6.3771998882293701e-01 9.8422996699810028e-02
+ <_>
+
+ 0 -1 1583 -2.9221000149846077e-02
+
+ -7.6769900321960449e-01 2.2634500265121460e-01
+ <_>
+
+ 0 -1 1584 -6.4949998632073402e-03
+
+ 4.5600101351737976e-01 -2.6528900861740112e-01
+ <_>
+
+ 0 -1 1585 -3.0034000054001808e-02
+
+ -7.6551097631454468e-01 1.4009299874305725e-01
+ <_>
+
+ 0 -1 1586 7.8360000625252724e-03
+
+ 4.6755999326705933e-02 -7.2356200218200684e-01
+ <_>
+
+ 0 -1 1587 8.8550001382827759e-03
+
+ -4.9141999334096909e-02 5.1472699642181396e-01
+ <_>
+
+ 0 -1 1588 9.5973998308181763e-02
+
+ -2.0068999379873276e-02 -1.0850950479507446e+00
+ <_>
+
+ 0 -1 1589 -3.2876998186111450e-02
+
+ -9.5875298976898193e-01 1.4543600380420685e-01
+ <_>
+
+ 0 -1 1590 -1.3384000398218632e-02
+
+ -7.0013600587844849e-01 2.9157999902963638e-02
+ <_>
+
+ 0 -1 1591 1.5235999599099159e-02
+
+ -2.8235700726509094e-01 2.5367999076843262e-01
+ <_>
+
+ 0 -1 1592 1.2054000049829483e-02
+
+ -2.5303399562835693e-01 4.6526700258255005e-01
+ <_>
+
+ 0 -1 1593 -7.6295003294944763e-02
+
+ -6.9915801286697388e-01 1.3217200338840485e-01
+ <_>
+
+ 0 -1 1594 -1.2040000408887863e-02
+
+ 4.5894598960876465e-01 -2.3856499791145325e-01
+ <_>
+
+ 0 -1 1595 2.1916000172495842e-02
+
+ 1.8268600106239319e-01 -6.1629700660705566e-01
+ <_>
+
+ 0 -1 1596 -2.7330000884830952e-03
+
+ -6.3257902860641479e-01 3.4219000488519669e-02
+ <_>
+
+ 0 -1 1597 -4.8652000725269318e-02
+
+ -1.0297729969024658e+00 1.7386500537395477e-01
+ <_>
+
+ 0 -1 1598 -1.0463999584317207e-02
+
+ 3.4757301211357117e-01 -2.7464100718498230e-01
+ <_>
+
+ 0 -1 1599 -6.6550001502037048e-03
+
+ -2.8980299830436707e-01 2.4037900567054749e-01
+ <_>
+
+ 0 -1 1600 8.5469996556639671e-03
+
+ -4.4340500235557556e-01 1.4267399907112122e-01
+ <_>
+
+ 0 -1 1601 1.9913999363780022e-02
+
+ 1.7740400135517120e-01 -2.4096299707889557e-01
+ <_>
+
+ 0 -1 1602 2.2012999281287193e-02
+
+ -1.0812000371515751e-02 -9.4690799713134766e-01
+ <_>
+
+ 0 -1 1603 -5.2179001271724701e-02
+
+ 1.6547499895095825e+00 9.6487000584602356e-02
+ <_>
+
+ 0 -1 1604 1.9698999822139740e-02
+
+ -6.7560002207756042e-03 -8.6311501264572144e-01
+ <_>
+
+ 0 -1 1605 2.3040000349283218e-02
+
+ -2.3519999813288450e-03 3.8531300425529480e-01
+ <_>
+
+ 0 -1 1606 -1.5038000419735909e-02
+
+ -6.1905699968338013e-01 3.1077999621629715e-02
+ <_>
+
+ 0 -1 1607 -4.9956001341342926e-02
+
+ 7.0657497644424438e-01 4.7880999743938446e-02
+ <_>
+
+ 0 -1 1608 -6.9269999861717224e-02
+
+ 3.9212900400161743e-01 -2.3848000168800354e-01
+ <_>
+
+ 0 -1 1609 4.7399997711181641e-03
+
+ -2.4309000000357628e-02 2.5386300683021545e-01
+ <_>
+
+ 0 -1 1610 -3.3923998475074768e-02
+
+ 4.6930399537086487e-01 -2.3321899771690369e-01
+ <_>
+
+ 0 -1 1611 -1.6231000423431396e-02
+
+ 3.2319200038909912e-01 -2.0545600354671478e-01
+ <_>
+
+ 0 -1 1612 -5.0193000584840775e-02
+
+ -1.2277870178222656e+00 -4.0798000991344452e-02
+ <_>
+
+ 0 -1 1613 5.6944001466035843e-02
+
+ 4.5184001326560974e-02 6.0197502374649048e-01
+ <_>
+
+ 0 -1 1614 4.0936999022960663e-02
+
+ -1.6772800683975220e-01 8.9819300174713135e-01
+ <_>
+
+ 0 -1 1615 -3.0839999672025442e-03
+
+ 3.3716198801994324e-01 -2.7240800857543945e-01
+ <_>
+
+ 0 -1 1616 -3.2600000500679016e-02
+
+ -8.5446500778198242e-01 1.9664999097585678e-02
+ <_>
+
+ 0 -1 1617 9.8480999469757080e-02
+
+ 5.4742000997066498e-02 6.3827300071716309e-01
+ <_>
+
+ 0 -1 1618 -3.8185000419616699e-02
+
+ 5.2274698019027710e-01 -2.3384800553321838e-01
+ <_>
+
+ 0 -1 1619 -4.5917000621557236e-02
+
+ 6.2829202413558960e-01 3.2859001308679581e-02
+ <_>
+
+ 0 -1 1620 -1.1955499649047852e-01
+
+ -6.1572700738906860e-01 3.4680001437664032e-02
+ <_>
+
+ 0 -1 1621 -1.2044399976730347e-01
+
+ -8.4380000829696655e-01 1.6530700027942657e-01
+ <_>
+
+ 0 -1 1622 7.0619001984596252e-02
+
+ -6.3261002302169800e-02 -1.9863929748535156e+00
+ <_>
+
+ 0 -1 1623 8.4889996796846390e-03
+
+ -1.7663399875164032e-01 3.8011199235916138e-01
+ <_>
+
+ 0 -1 1624 2.2710999473929405e-02
+
+ -2.7605999261140823e-02 -9.1921401023864746e-01
+ <_>
+
+ 0 -1 1625 4.9700000090524554e-04
+
+ -2.4293200671672821e-01 2.2878900170326233e-01
+ <_>
+
+ 0 -1 1626 3.4651998430490494e-02
+
+ -2.3705999553203583e-01 5.4010999202728271e-01
+ <_>
+
+ 0 -1 1627 -4.4700000435113907e-03
+
+ 3.9078998565673828e-01 -1.2693800032138824e-01
+ <_>
+
+ 0 -1 1628 2.3643000051379204e-02
+
+ -2.6663699746131897e-01 3.2312598824501038e-01
+ <_>
+
+ 0 -1 1629 1.2813000008463860e-02
+
+ 1.7540800571441650e-01 -6.0787999629974365e-01
+ <_>
+
+ 0 -1 1630 -1.1250999756157398e-02
+
+ -1.0852589607238770e+00 -2.8046000748872757e-02
+ <_>
+
+ 0 -1 1631 -4.1535001248121262e-02
+
+ 7.1887397766113281e-01 2.7982000261545181e-02
+ <_>
+
+ 0 -1 1632 -9.3470998108386993e-02
+
+ -1.1906319856643677e+00 -4.4810999184846878e-02
+ <_>
+
+ 0 -1 1633 -2.7249999344348907e-02
+
+ 6.2942498922348022e-01 9.5039997249841690e-03
+ <_>
+
+ 0 -1 1634 -2.1759999915957451e-02
+
+ 1.3233649730682373e+00 -1.5027000010013580e-01
+ <_>
+
+ 0 -1 1635 -9.6890004351735115e-03
+
+ -3.3947101235389709e-01 1.7085799574851990e-01
+ <_>
+
+ 0 -1 1636 6.9395996630191803e-02
+
+ -2.5657799839973450e-01 4.7652098536491394e-01
+ <_>
+
+ 0 -1 1637 3.1208999454975128e-02
+
+ 1.4154000580310822e-01 -3.4942001104354858e-01
+ <_>
+
+ 0 -1 1638 -4.9727000296115875e-02
+
+ -1.1675560474395752e+00 -4.0757998824119568e-02
+ <_>
+
+ 0 -1 1639 -2.0301999524235725e-02
+
+ -3.9486399292945862e-01 1.5814900398254395e-01
+ <_>
+
+ 0 -1 1640 -1.5367000363767147e-02
+
+ 4.9300000071525574e-01 -2.0092099905014038e-01
+ <_>
+
+ 0 -1 1641 -5.0735000520944595e-02
+
+ 1.8736059665679932e+00 8.6730003356933594e-02
+ <_>
+
+ 0 -1 1642 -2.0726000890135765e-02
+
+ -8.8938397169113159e-01 -7.3199998587369919e-03
+ <_>
+
+ 0 -1 1643 -3.0993999913334846e-02
+
+ -1.1664899587631226e+00 1.4274600148200989e-01
+ <_>
+
+ 0 -1 1644 -4.4269999489188194e-03
+
+ -6.6815102100372314e-01 4.4120000675320625e-03
+ <_>
+
+ 0 -1 1645 -4.5743998140096664e-02
+
+ -4.7955200076103210e-01 1.5121999382972717e-01
+ <_>
+
+ 0 -1 1646 1.6698999330401421e-02
+
+ 1.2048599869012833e-01 -4.5235899090766907e-01
+ <_>
+
+ 0 -1 1647 3.2210000790655613e-03
+
+ -7.7615000307559967e-02 2.7846598625183105e-01
+ <_>
+
+ 0 -1 1648 2.4434000253677368e-02
+
+ -1.9987100362777710e-01 6.7253702878952026e-01
+ <_>
+
+ 0 -1 1649 -7.9677999019622803e-02
+
+ 9.2222398519515991e-01 9.2557996511459351e-02
+ <_>
+
+ 0 -1 1650 4.4530000537633896e-02
+
+ -2.6690500974655151e-01 3.3320501446723938e-01
+ <_>
+
+ 0 -1 1651 -1.2528300285339355e-01
+
+ -5.4253101348876953e-01 1.3976299762725830e-01
+ <_>
+
+ 0 -1 1652 1.7971999943256378e-02
+
+ 1.8219999969005585e-02 -6.8048501014709473e-01
+ <_>
+
+ 0 -1 1653 1.9184000790119171e-02
+
+ -1.2583999894559383e-02 5.4126697778701782e-01
+ <_>
+
+ 0 -1 1654 4.0024001151323318e-02
+
+ -1.7638799548149109e-01 7.8810399770736694e-01
+ <_>
+
+ 0 -1 1655 1.3558999635279179e-02
+
+ 2.0737600326538086e-01 -4.7744300961494446e-01
+ <_>
+
+ 0 -1 1656 1.6220999881625175e-02
+
+ 2.3076999932527542e-02 -6.1182099580764771e-01
+ <_>
+
+ 0 -1 1657 1.1229000054299831e-02
+
+ -1.7728000879287720e-02 4.1764199733734131e-01
+ <_>
+
+ 0 -1 1658 3.9193000644445419e-02
+
+ -1.8948499858379364e-01 7.4019300937652588e-01
+ <_>
+
+ 0 -1 1659 -9.5539996400475502e-03
+
+ 4.0947100520133972e-01 -1.3508899509906769e-01
+ <_>
+
+ 0 -1 1660 2.7878999710083008e-02
+
+ -2.0350700616836548e-01 6.1625397205352783e-01
+ <_>
+
+ 0 -1 1661 -2.3600999265909195e-02
+
+ -1.6967060565948486e+00 1.4633199572563171e-01
+ <_>
+
+ 0 -1 1662 2.6930000633001328e-02
+
+ -3.0401999130845070e-02 -1.0909470319747925e+00
+ <_>
+
+ 0 -1 1663 2.8999999631196260e-04
+
+ -2.0076000690460205e-01 2.2314099967479706e-01
+ <_>
+
+ 0 -1 1664 -4.1124999523162842e-02
+
+ -4.5242199301719666e-01 5.7392001152038574e-02
+ <_>
+
+ 0 -1 1665 6.6789998672902584e-03
+
+ 2.3824900388717651e-01 -2.1262100338935852e-01
+ <_>
+
+ 0 -1 1666 4.7864999622106552e-02
+
+ -1.8194800615310669e-01 6.1918401718139648e-01
+ <_>
+
+ 0 -1 1667 -3.1679999083280563e-03
+
+ -2.7393200993537903e-01 2.5017300248146057e-01
+ <_>
+
+ 0 -1 1668 -8.6230002343654633e-03
+
+ -4.6280300617218018e-01 4.2397998273372650e-02
+ <_>
+
+ 0 -1 1669 -7.4350000359117985e-03
+
+ 4.1796800494194031e-01 -1.7079999670386314e-03
+ <_>
+
+ 0 -1 1670 -1.8769999733194709e-03
+
+ 1.4602300524711609e-01 -3.3721101284027100e-01
+ <_>
+
+ 0 -1 1671 -8.6226001381874084e-02
+
+ 7.5143402814865112e-01 1.0711999610066414e-02
+ <_>
+
+ 0 -1 1672 4.6833999454975128e-02
+
+ -1.9119599461555481e-01 4.8414900898933411e-01
+ <_>
+
+ 0 -1 1673 -9.2000002041459084e-05
+
+ 3.5220399498939514e-01 -1.7333300411701202e-01
+ <_>
+
+ 0 -1 1674 -1.6343999654054642e-02
+
+ -6.4397698640823364e-01 9.0680001303553581e-03
+ <_>
+
+ 0 -1 1675 4.5703999698162079e-02
+
+ 1.8216000869870186e-02 3.1970798969268799e-01
+ <_>
+
+ 0 -1 1676 -2.7382999658584595e-02
+
+ 1.0564049482345581e+00 -1.7276400327682495e-01
+ <_>
+
+ 0 -1 1677 -2.7602000162005424e-02
+
+ 2.9715499281883240e-01 -9.4600003212690353e-03
+ <_>
+
+ 0 -1 1678 7.6939999125897884e-03
+
+ -2.1660299599170685e-01 4.7385200858116150e-01
+ <_>
+
+ 0 -1 1679 -7.0500001311302185e-04
+
+ 2.4048799276351929e-01 -2.6776000857353210e-01
+ <_>
+
+ 0 -1 1680 1.1054199934005737e-01
+
+ -3.3539000898599625e-02 -1.0233880281448364e+00
+ <_>
+
+ 0 -1 1681 6.8765997886657715e-02
+
+ -4.3239998631179333e-03 5.7153397798538208e-01
+ <_>
+
+ 0 -1 1682 1.7999999690800905e-03
+
+ 7.7574998140335083e-02 -4.2092698812484741e-01
+ <_>
+
+ 0 -1 1683 1.9232000410556793e-01
+
+ 8.2021996378898621e-02 2.8810169696807861e+00
+ <_>
+
+ 0 -1 1684 1.5742099285125732e-01
+
+ -1.3708199560642242e-01 2.0890059471130371e+00
+ <_>
+
+ 0 -1 1685 -4.9387000501155853e-02
+
+ -1.8610910177230835e+00 1.4332099258899689e-01
+ <_>
+
+ 0 -1 1686 5.1929000765085220e-02
+
+ -1.8737000226974487e-01 5.4231601953506470e-01
+ <_>
+
+ 0 -1 1687 4.9965001642704010e-02
+
+ 1.4175300300121307e-01 -1.5625779628753662e+00
+ <_>
+
+ 0 -1 1688 -4.2633000761270523e-02
+
+ 1.6059479713439941e+00 -1.4712899923324585e-01
+ <_>
+
+ 0 -1 1689 -3.7553999572992325e-02
+
+ -8.0974900722503662e-01 1.3256999850273132e-01
+ <_>
+
+ 0 -1 1690 -3.7174999713897705e-02
+
+ -1.3945020437240601e+00 -5.7055000215768814e-02
+ <_>
+
+ 0 -1 1691 1.3945999555289745e-02
+
+ 3.3427000045776367e-02 5.7474797964096069e-01
+ <_>
+
+ 0 -1 1692 -4.4800000614486635e-04
+
+ -5.5327498912811279e-01 2.1952999755740166e-02
+ <_>
+
+ 0 -1 1693 3.1993001699447632e-02
+
+ 2.0340999588370323e-02 3.7459200620651245e-01
+ <_>
+
+ 0 -1 1694 -4.2799999937415123e-03
+
+ 4.4428700208663940e-01 -2.2999699413776398e-01
+ <_>
+
+ 0 -1 1695 9.8550003021955490e-03
+
+ 1.8315799534320831e-01 -4.0964999794960022e-01
+ <_>
+
+ 0 -1 1696 9.3356996774673462e-02
+
+ -6.3661001622676849e-02 -1.6929290294647217e+00
+ <_>
+
+ 0 -1 1697 1.7209999263286591e-02
+
+ 2.0153899490833282e-01 -4.6061098575592041e-01
+ <_>
+
+ 0 -1 1698 8.4319999441504478e-03
+
+ -3.2003998756408691e-01 1.5312199294567108e-01
+ <_>
+
+ 0 -1 1699 -1.4054999686777592e-02
+
+ 8.6882400512695312e-01 3.2575000077486038e-02
+ <_>
+
+ 0 -1 1700 -7.7180000953376293e-03
+
+ 6.3686698675155640e-01 -1.8425500392913818e-01
+ <_>
+
+ 0 -1 1701 2.8005000203847885e-02
+
+ 1.7357499897480011e-01 -4.7883599996566772e-01
+ <_>
+
+ 0 -1 1702 -1.8884999677538872e-02
+
+ 2.4101600050926208e-01 -2.6547598838806152e-01
+ <_>
+
+ 0 -1 1703 -1.8585000187158585e-02
+
+ 5.4232501983642578e-01 5.3633000701665878e-02
+ <_>
+
+ 0 -1 1704 -3.6437001079320908e-02
+
+ 2.3908898830413818e+00 -1.3634699583053589e-01
+ <_>
+
+ 0 -1 1705 3.2455001026391983e-02
+
+ 1.5910699963569641e-01 -6.7581498622894287e-01
+ <_>
+
+ 0 -1 1706 5.9781998395919800e-02
+
+ -2.3479999508708715e-03 -7.3053699731826782e-01
+ <_>
+
+ 0 -1 1707 9.8209995776414871e-03
+
+ -1.1444099992513657e-01 3.0570301413536072e-01
+ <_>
+
+ 0 -1 1708 -3.5163998603820801e-02
+
+ -1.0511469841003418e+00 -3.3103000372648239e-02
+ <_>
+
+ 0 -1 1709 2.7429999317973852e-03
+
+ -2.0135399699211121e-01 3.2754099369049072e-01
+ <_>
+
+ 0 -1 1710 8.1059997901320457e-03
+
+ -2.1383500099182129e-01 4.3362098932266235e-01
+ <_>
+
+ 0 -1 1711 8.8942997157573700e-02
+
+ 1.0940899699926376e-01 -4.7609338760375977e+00
+ <_>
+
+ 0 -1 1712 -3.0054999515414238e-02
+
+ -1.7169300317764282e+00 -6.0919001698493958e-02
+ <_>
+
+ 0 -1 1713 -2.1734999492764473e-02
+
+ 6.4778900146484375e-01 -3.2830998301506042e-02
+ <_>
+
+ 0 -1 1714 3.7648998200893402e-02
+
+ -1.0060000233352184e-02 -7.6569098234176636e-01
+ <_>
+
+ 0 -1 1715 2.7189999818801880e-03
+
+ 1.9888900220394135e-01 -8.2479000091552734e-02
+ <_>
+
+ 0 -1 1716 -1.0548000223934650e-02
+
+ -8.6613601446151733e-01 -2.5986000895500183e-02
+ <_>
+
+ 0 -1 1717 1.2966300547122955e-01
+
+ 1.3911999762058258e-01 -2.2271950244903564e+00
+ <_>
+
+ 0 -1 1718 -1.7676999792456627e-02
+
+ 3.3967700600624084e-01 -2.3989599943161011e-01
+ <_>
+
+ 0 -1 1719 -7.7051997184753418e-02
+
+ -2.5017969608306885e+00 1.2841999530792236e-01
+ <_>
+
+ 0 -1 1720 -1.9230000674724579e-02
+
+ 5.0641202926635742e-01 -1.9751599431037903e-01
+ <_>
+
+ 0 -1 1721 -5.1222998648881912e-02
+
+ -2.9333369731903076e+00 1.3858500123023987e-01
+ <_>
+
+ 0 -1 1722 2.0830000285059214e-03
+
+ -6.0043597221374512e-01 2.9718000441789627e-02
+ <_>
+
+ 0 -1 1723 2.5418000295758247e-02
+
+ 3.3915799856185913e-01 -1.4392000436782837e-01
+ <_>
+
+ 0 -1 1724 -2.3905999958515167e-02
+
+ -1.1082680225372314e+00 -4.7377001494169235e-02
+ <_>
+
+ 0 -1 1725 -6.3740001060068607e-03
+
+ 4.4533699750900269e-01 -6.7052997648715973e-02
+ <_>
+
+ 0 -1 1726 -3.7698999047279358e-02
+
+ -1.0406579971313477e+00 -4.1790001094341278e-02
+ <_>
+
+ 0 -1 1727 2.1655100584030151e-01
+
+ 3.3863000571727753e-02 8.2017302513122559e-01
+ <_>
+
+ 0 -1 1728 -1.3400999829173088e-02
+
+ 5.2903497219085693e-01 -1.9133000075817108e-01
+ <_>
+ 196
+ -3.2103500366210938e+00
+
+ <_>
+
+ 0 -1 1729 7.1268998086452484e-02
+
+ -5.3631198406219482e-01 6.0715299844741821e-01
+ <_>
+
+ 0 -1 1730 5.6111000478267670e-02
+
+ -5.0141602754592896e-01 4.3976101279258728e-01
+ <_>
+
+ 0 -1 1731 4.0463998913764954e-02
+
+ -3.2922199368476868e-01 5.4834699630737305e-01
+ <_>
+
+ 0 -1 1732 6.3155002892017365e-02
+
+ -3.1701698899269104e-01 4.6152999997138977e-01
+ <_>
+
+ 0 -1 1733 1.0320999659597874e-02
+
+ 1.0694999992847443e-01 -9.8243898153305054e-01
+ <_>
+
+ 0 -1 1734 6.2606997787952423e-02
+
+ -1.4329700171947479e-01 7.1095001697540283e-01
+ <_>
+
+ 0 -1 1735 -3.9416000247001648e-02
+
+ 9.4380199909210205e-01 -2.1572099626064301e-01
+ <_>
+
+ 0 -1 1736 -5.3960001096129417e-03
+
+ -5.4611998796463013e-01 2.5303798913955688e-01
+ <_>
+
+ 0 -1 1737 1.0773199796676636e-01
+
+ 1.2496000155806541e-02 -1.0809199810028076e+00
+ <_>
+
+ 0 -1 1738 1.6982000321149826e-02
+
+ -3.1536400318145752e-01 5.1239997148513794e-01
+ <_>
+
+ 0 -1 1739 3.1216999515891075e-02
+
+ -4.5199999585747719e-03 -1.2443480491638184e+00
+ <_>
+
+ 0 -1 1740 -2.3106999695301056e-02
+
+ -7.6492899656295776e-01 2.0640599727630615e-01
+ <_>
+
+ 0 -1 1741 -1.1203999631106853e-02
+
+ 2.4092699587345123e-01 -3.5142099857330322e-01
+ <_>
+
+ 0 -1 1742 -4.7479998320341110e-03
+
+ -9.7007997334003448e-02 2.0638099312782288e-01
+ <_>
+
+ 0 -1 1743 -1.7358999699354172e-02
+
+ -7.9020297527313232e-01 2.1852999925613403e-02
+ <_>
+
+ 0 -1 1744 1.8851999193429947e-02
+
+ -1.0394600033760071e-01 5.4844200611114502e-01
+ <_>
+
+ 0 -1 1745 7.2249998338520527e-03
+
+ -4.0409401059150696e-01 2.6763799786567688e-01
+ <_>
+
+ 0 -1 1746 1.8915999680757523e-02
+
+ 2.0508000254631042e-01 -1.0206340551376343e+00
+ <_>
+
+ 0 -1 1747 3.1156999990344048e-02
+
+ 1.2400000123307109e-03 -8.7293499708175659e-01
+ <_>
+
+ 0 -1 1748 2.0951999351382256e-02
+
+ -5.5559999309480190e-03 8.0356198549270630e-01
+ <_>
+
+ 0 -1 1749 1.1291000060737133e-02
+
+ -3.6478400230407715e-01 2.2767899930477142e-01
+ <_>
+
+ 0 -1 1750 -5.7011000812053680e-02
+
+ -1.4295619726181030e+00 1.4322000741958618e-01
+ <_>
+
+ 0 -1 1751 7.2194002568721771e-02
+
+ -4.1850000619888306e-02 -1.9111829996109009e+00
+ <_>
+
+ 0 -1 1752 -1.9874000921845436e-02
+
+ 2.6425498723983765e-01 -3.2617700099945068e-01
+ <_>
+
+ 0 -1 1753 -1.6692999750375748e-02
+
+ -8.3907800912857056e-01 4.0799999260343611e-04
+ <_>
+
+ 0 -1 1754 -3.9834998548030853e-02
+
+ -4.8858499526977539e-01 1.6436100006103516e-01
+ <_>
+
+ 0 -1 1755 2.7009999379515648e-02
+
+ -1.8862499296665192e-01 8.3419400453567505e-01
+ <_>
+
+ 0 -1 1756 -3.9420002140104771e-03
+
+ 2.3231500387191772e-01 -7.2360001504421234e-02
+ <_>
+
+ 0 -1 1757 2.2833000868558884e-02
+
+ -3.5884000360965729e-02 -1.1549400091171265e+00
+ <_>
+
+ 0 -1 1758 -6.8888001143932343e-02
+
+ -1.7837309837341309e+00 1.5159000456333160e-01
+ <_>
+
+ 0 -1 1759 4.3097000569105148e-02
+
+ -2.1608099341392517e-01 5.0624102354049683e-01
+ <_>
+
+ 0 -1 1760 8.6239995434880257e-03
+
+ -1.7795599997043610e-01 2.8957900404930115e-01
+ <_>
+
+ 0 -1 1761 1.4561000280082226e-02
+
+ -1.1408000253140926e-02 -8.9402002096176147e-01
+ <_>
+
+ 0 -1 1762 -1.1501000262796879e-02
+
+ 3.0171999335289001e-01 -4.3659001588821411e-02
+ <_>
+
+ 0 -1 1763 -1.0971499979496002e-01
+
+ -9.5147097110748291e-01 -1.9973000511527061e-02
+ <_>
+
+ 0 -1 1764 4.5228000730276108e-02
+
+ 3.3110998570919037e-02 9.6619802713394165e-01
+ <_>
+
+ 0 -1 1765 -2.7047999203205109e-02
+
+ 9.7963601350784302e-01 -1.7261900007724762e-01
+ <_>
+
+ 0 -1 1766 1.8030999228358269e-02
+
+ -2.0801000297069550e-02 2.7385899424552917e-01
+ <_>
+
+ 0 -1 1767 5.0524998456239700e-02
+
+ -5.6802999228239059e-02 -1.7775089740753174e+00
+ <_>
+
+ 0 -1 1768 -2.9923999682068825e-02
+
+ 6.5329200029373169e-01 -2.3537000641226768e-02
+ <_>
+
+ 0 -1 1769 3.8058001548051834e-02
+
+ 2.6317000389099121e-02 -7.0665699243545532e-01
+ <_>
+
+ 0 -1 1770 1.8563899397850037e-01
+
+ -5.6039998307824135e-03 3.2873699069023132e-01
+ <_>
+
+ 0 -1 1771 -4.0670000016689301e-03
+
+ 3.4204798936843872e-01 -3.0171599984169006e-01
+ <_>
+
+ 0 -1 1772 1.0108999907970428e-02
+
+ -7.3600001633167267e-03 5.7981598377227783e-01
+ <_>
+
+ 0 -1 1773 -1.1567000299692154e-02
+
+ -5.2722197771072388e-01 4.6447999775409698e-02
+ <_>
+
+ 0 -1 1774 -6.5649999305605888e-03
+
+ -5.8529102802276611e-01 1.9101899862289429e-01
+ <_>
+
+ 0 -1 1775 1.0582000017166138e-02
+
+ 2.1073000505566597e-02 -6.8892598152160645e-01
+ <_>
+
+ 0 -1 1776 -2.0304000005125999e-02
+
+ -3.6400699615478516e-01 1.5338799357414246e-01
+ <_>
+
+ 0 -1 1777 2.3529999889433384e-03
+
+ 3.6164000630378723e-02 -5.9825098514556885e-01
+ <_>
+
+ 0 -1 1778 -1.4690000098198652e-03
+
+ -1.4707699418067932e-01 3.7507998943328857e-01
+ <_>
+
+ 0 -1 1779 8.6449999362230301e-03
+
+ -2.1708500385284424e-01 5.1936799287796021e-01
+ <_>
+
+ 0 -1 1780 -2.4326000362634659e-02
+
+ -1.0846769809722900e+00 1.4084799587726593e-01
+ <_>
+
+ 0 -1 1781 7.4418999254703522e-02
+
+ -1.5513800084590912e-01 1.1822769641876221e+00
+ <_>
+
+ 0 -1 1782 1.7077999189496040e-02
+
+ 4.4231001287698746e-02 9.1561102867126465e-01
+ <_>
+
+ 0 -1 1783 -2.4577999487519264e-02
+
+ -1.5504100322723389e+00 -5.4745998233556747e-02
+ <_>
+
+ 0 -1 1784 3.0205000191926956e-02
+
+ 1.6662800312042236e-01 -1.0001239776611328e+00
+ <_>
+
+ 0 -1 1785 1.2136000208556652e-02
+
+ -7.7079099416732788e-01 -4.8639997839927673e-03
+ <_>
+
+ 0 -1 1786 8.6717002093791962e-02
+
+ 1.1061699688434601e-01 -1.6857999563217163e+00
+ <_>
+
+ 0 -1 1787 -4.2309001088142395e-02
+
+ 1.1075930595397949e+00 -1.5438599884510040e-01
+ <_>
+
+ 0 -1 1788 -2.6420000940561295e-03
+
+ 2.7451899647712708e-01 -1.8456199765205383e-01
+ <_>
+
+ 0 -1 1789 -5.6662000715732574e-02
+
+ -8.0625599622726440e-01 -1.6928000375628471e-02
+ <_>
+
+ 0 -1 1790 2.3475000634789467e-02
+
+ 1.4187699556350708e-01 -2.5500899553298950e-01
+ <_>
+
+ 0 -1 1791 -2.0803000777959824e-02
+
+ 1.9826300442218781e-01 -3.1171199679374695e-01
+ <_>
+
+ 0 -1 1792 7.2599998675286770e-03
+
+ -5.0590999424457550e-02 4.1923800110816956e-01
+ <_>
+
+ 0 -1 1793 3.4160000085830688e-01
+
+ -1.6674900054931641e-01 9.2748600244522095e-01
+ <_>
+
+ 0 -1 1794 6.2029999680817127e-03
+
+ -1.2625899910926819e-01 4.0445300936698914e-01
+ <_>
+
+ 0 -1 1795 3.2692000269889832e-02
+
+ -3.2634999603033066e-02 -9.8939800262451172e-01
+ <_>
+
+ 0 -1 1796 2.1100000594742596e-04
+
+ -6.4534001052379608e-02 2.5473698973655701e-01
+ <_>
+
+ 0 -1 1797 7.2100001852959394e-04
+
+ -3.6618599295616150e-01 1.1973100155591965e-01
+ <_>
+
+ 0 -1 1798 5.4490998387336731e-02
+
+ 1.2073499709367752e-01 -1.0291390419006348e+00
+ <_>
+
+ 0 -1 1799 -1.0141000151634216e-02
+
+ -5.2177202701568604e-01 3.3734999597072601e-02
+ <_>
+
+ 0 -1 1800 -1.8815999850630760e-02
+
+ 6.5181797742843628e-01 1.3399999588727951e-03
+ <_>
+
+ 0 -1 1801 -5.3480002097785473e-03
+
+ 1.7370699346065521e-01 -3.4132000803947449e-01
+ <_>
+
+ 0 -1 1802 -1.0847000405192375e-02
+
+ -1.9699899852275848e-01 1.5045499801635742e-01
+ <_>
+
+ 0 -1 1803 -4.9926001578569412e-02
+
+ -5.0888502597808838e-01 3.0762000009417534e-02
+ <_>
+
+ 0 -1 1804 1.2160000391304493e-02
+
+ -6.9251999258995056e-02 1.8745499849319458e-01
+ <_>
+
+ 0 -1 1805 -2.2189998999238014e-03
+
+ -4.0849098563194275e-01 7.9954996705055237e-02
+ <_>
+
+ 0 -1 1806 3.1580000650137663e-03
+
+ -2.1124599874019623e-01 2.2366400063037872e-01
+ <_>
+
+ 0 -1 1807 4.1439998894929886e-03
+
+ -4.9900299310684204e-01 6.2917001545429230e-02
+ <_>
+
+ 0 -1 1808 -7.3730000294744968e-03
+
+ -2.0553299784660339e-01 2.2096699476242065e-01
+ <_>
+
+ 0 -1 1809 5.1812000572681427e-02
+
+ 1.8096800148487091e-01 -4.3495801091194153e-01
+ <_>
+
+ 0 -1 1810 1.8340000882744789e-02
+
+ 1.5200000256299973e-02 3.7991699576377869e-01
+ <_>
+
+ 0 -1 1811 1.7490799725055695e-01
+
+ -2.0920799672603607e-01 4.0013000369071960e-01
+ <_>
+
+ 0 -1 1812 5.3993999958038330e-02
+
+ 2.4751600623130798e-01 -2.6712900400161743e-01
+ <_>
+
+ 0 -1 1813 -3.2033199071884155e-01
+
+ -1.9094380140304565e+00 -6.6960997879505157e-02
+ <_>
+
+ 0 -1 1814 -2.7060000225901604e-02
+
+ -7.1371299028396606e-01 1.5904599428176880e-01
+ <_>
+
+ 0 -1 1815 7.7463999390602112e-02
+
+ -1.6970199346542358e-01 7.7552998065948486e-01
+ <_>
+
+ 0 -1 1816 2.3771999403834343e-02
+
+ 1.9021899998188019e-01 -6.0162097215652466e-01
+ <_>
+
+ 0 -1 1817 1.1501000262796879e-02
+
+ 7.7039999887347221e-03 -6.1730301380157471e-01
+ <_>
+
+ 0 -1 1818 3.2616000622510910e-02
+
+ 1.7159199714660645e-01 -7.0978200435638428e-01
+ <_>
+
+ 0 -1 1819 -4.4383000582456589e-02
+
+ -2.2606229782104492e+00 -7.3276996612548828e-02
+ <_>
+
+ 0 -1 1820 -5.8476001024246216e-02
+
+ 2.4087750911712646e+00 8.3091996610164642e-02
+ <_>
+
+ 0 -1 1821 1.9303999841213226e-02
+
+ -2.7082300186157227e-01 2.7369999885559082e-01
+ <_>
+
+ 0 -1 1822 -4.4705998152494431e-02
+
+ 3.1355598568916321e-01 -6.2492001801729202e-02
+ <_>
+
+ 0 -1 1823 -6.0334999114274979e-02
+
+ -1.4515119791030884e+00 -5.8761000633239746e-02
+ <_>
+
+ 0 -1 1824 1.1667000129818916e-02
+
+ -1.8084999173879623e-02 5.0479698181152344e-01
+ <_>
+
+ 0 -1 1825 2.8009999543428421e-02
+
+ -2.3302899301052094e-01 3.0708700418472290e-01
+ <_>
+
+ 0 -1 1826 6.5397001802921295e-02
+
+ 1.4135900139808655e-01 -5.0010901689529419e-01
+ <_>
+
+ 0 -1 1827 9.6239997074007988e-03
+
+ -2.2054600715637207e-01 3.9191201329231262e-01
+ <_>
+
+ 0 -1 1828 2.5510000996291637e-03
+
+ -1.1381500214338303e-01 2.0032300055027008e-01
+ <_>
+
+ 0 -1 1829 3.1847000122070312e-02
+
+ 2.5476999580860138e-02 -5.3326398134231567e-01
+ <_>
+
+ 0 -1 1830 3.3055000007152557e-02
+
+ 1.7807699739933014e-01 -6.2793898582458496e-01
+ <_>
+
+ 0 -1 1831 4.7600999474525452e-02
+
+ -1.4747899770736694e-01 1.4204180240631104e+00
+ <_>
+
+ 0 -1 1832 -1.9571999087929726e-02
+
+ -5.2693498134613037e-01 1.5838600695133209e-01
+ <_>
+
+ 0 -1 1833 -5.4730001837015152e-02
+
+ 8.8231599330902100e-01 -1.6627800464630127e-01
+ <_>
+
+ 0 -1 1834 -2.2686000913381577e-02
+
+ -4.8386898636817932e-01 1.5000100433826447e-01
+ <_>
+
+ 0 -1 1835 1.0713200271129608e-01
+
+ -2.1336199343204498e-01 4.2333900928497314e-01
+ <_>
+
+ 0 -1 1836 -3.6380000412464142e-02
+
+ -7.4198000133037567e-02 1.4589400589466095e-01
+ <_>
+
+ 0 -1 1837 1.3935999944806099e-02
+
+ -2.4911600351333618e-01 2.6771199703216553e-01
+ <_>
+
+ 0 -1 1838 2.0991999655961990e-02
+
+ 8.7959999218583107e-03 4.3064999580383301e-01
+ <_>
+
+ 0 -1 1839 4.9118999391794205e-02
+
+ -1.7591999471187592e-01 6.9282901287078857e-01
+ <_>
+
+ 0 -1 1840 3.6315999925136566e-02
+
+ 1.3145299255847931e-01 -3.3597299456596375e-01
+ <_>
+
+ 0 -1 1841 4.1228000074625015e-02
+
+ -4.5692000538110733e-02 -1.3515930175781250e+00
+ <_>
+
+ 0 -1 1842 1.5672000125050545e-02
+
+ 1.7544099688529968e-01 -6.0550000518560410e-02
+ <_>
+
+ 0 -1 1843 -1.6286000609397888e-02
+
+ -1.1308189630508423e+00 -3.9533000439405441e-02
+ <_>
+
+ 0 -1 1844 -3.0229999683797359e-03
+
+ -2.2454300522804260e-01 2.3628099262714386e-01
+ <_>
+
+ 0 -1 1845 -1.3786299526691437e-01
+
+ 4.5376899838447571e-01 -2.1098700165748596e-01
+ <_>
+
+ 0 -1 1846 -9.6760001033544540e-03
+
+ -1.5105099976062775e-01 2.0781700313091278e-01
+ <_>
+
+ 0 -1 1847 -2.4839999154210091e-02
+
+ -6.8350297212600708e-01 -8.0040004104375839e-03
+ <_>
+
+ 0 -1 1848 -1.3964399695396423e-01
+
+ 6.5011298656463623e-01 4.6544000506401062e-02
+ <_>
+
+ 0 -1 1849 -8.2153998315334320e-02
+
+ 4.4887199997901917e-01 -2.3591999709606171e-01
+ <_>
+
+ 0 -1 1850 3.8449999410659075e-03
+
+ -8.8173002004623413e-02 2.7346798777580261e-01
+ <_>
+
+ 0 -1 1851 -6.6579999402165413e-03
+
+ -4.6866598725318909e-01 7.7001996338367462e-02
+ <_>
+
+ 0 -1 1852 -1.5898000448942184e-02
+
+ 2.9268398880958557e-01 -2.1941000595688820e-02
+ <_>
+
+ 0 -1 1853 -5.0946000963449478e-02
+
+ -1.2093789577484131e+00 -4.2109999805688858e-02
+ <_>
+
+ 0 -1 1854 1.6837999224662781e-02
+
+ -4.5595999807119370e-02 5.0180697441101074e-01
+ <_>
+
+ 0 -1 1855 1.5918999910354614e-02
+
+ -2.6904299855232239e-01 2.6516300439834595e-01
+ <_>
+
+ 0 -1 1856 3.6309999413788319e-03
+
+ -1.3046100735664368e-01 3.1807100772857666e-01
+ <_>
+
+ 0 -1 1857 -8.6144998669624329e-02
+
+ 1.9443659782409668e+00 -1.3978299498558044e-01
+ <_>
+
+ 0 -1 1858 3.3140998333692551e-02
+
+ 1.5266799926757812e-01 -3.0866000801324844e-02
+ <_>
+
+ 0 -1 1859 -3.9679999463260174e-03
+
+ -7.1202301979064941e-01 -1.3844000175595284e-02
+ <_>
+
+ 0 -1 1860 -2.4008000269532204e-02
+
+ 9.2007797956466675e-01 4.6723999083042145e-02
+ <_>
+
+ 0 -1 1861 8.7320003658533096e-03
+
+ -2.2567300498485565e-01 3.1931799650192261e-01
+ <_>
+
+ 0 -1 1862 -2.7786999940872192e-02
+
+ -7.2337102890014648e-01 1.7018599808216095e-01
+ <_>
+
+ 0 -1 1863 -1.9455300271511078e-01
+
+ 1.2461860179901123e+00 -1.4736199378967285e-01
+ <_>
+
+ 0 -1 1864 -1.0869699716567993e-01
+
+ -1.4465179443359375e+00 1.2145300209522247e-01
+ <_>
+
+ 0 -1 1865 -1.9494999200105667e-02
+
+ -7.8153097629547119e-01 -2.3732999339699745e-02
+ <_>
+
+ 0 -1 1866 3.0650000553578138e-03
+
+ -8.5471397638320923e-01 1.6686999797821045e-01
+ <_>
+
+ 0 -1 1867 5.9193998575210571e-02
+
+ -1.4853699505329132e-01 1.1273469924926758e+00
+ <_>
+
+ 0 -1 1868 -5.4207999259233475e-02
+
+ 5.4726999998092651e-01 3.5523999482393265e-02
+ <_>
+
+ 0 -1 1869 -3.9324998855590820e-02
+
+ 3.6642599105834961e-01 -2.0543999969959259e-01
+ <_>
+
+ 0 -1 1870 8.2278996706008911e-02
+
+ -3.5007998347282410e-02 5.3994202613830566e-01
+ <_>
+
+ 0 -1 1871 -7.4479999020695686e-03
+
+ -6.1537498235702515e-01 -3.5319998860359192e-03
+ <_>
+
+ 0 -1 1872 7.3770000599324703e-03
+
+ -6.5591000020503998e-02 4.1961398720741272e-01
+ <_>
+
+ 0 -1 1873 7.0779998786747456e-03
+
+ -3.4129500389099121e-01 1.2536799907684326e-01
+ <_>
+
+ 0 -1 1874 -1.5581999905407429e-02
+
+ -3.0240398645401001e-01 2.1511000394821167e-01
+ <_>
+
+ 0 -1 1875 -2.7399999089539051e-03
+
+ 7.6553001999855042e-02 -4.1060501337051392e-01
+ <_>
+
+ 0 -1 1876 -7.0600003004074097e-02
+
+ -9.7356200218200684e-01 1.1241800338029861e-01
+ <_>
+
+ 0 -1 1877 -1.1706000193953514e-02
+
+ 1.8560700118541718e-01 -2.9755198955535889e-01
+ <_>
+
+ 0 -1 1878 7.1499997284263372e-04
+
+ -5.9650000184774399e-02 2.4824699759483337e-01
+ <_>
+
+ 0 -1 1879 -3.6866001784801483e-02
+
+ 3.2751700282096863e-01 -2.3059600591659546e-01
+ <_>
+
+ 0 -1 1880 -3.2526999711990356e-02
+
+ -2.9320299625396729e-01 1.5427699685096741e-01
+ <_>
+
+ 0 -1 1881 -7.4813999235630035e-02
+
+ -1.2143570184707642e+00 -5.2244000136852264e-02
+ <_>
+
+ 0 -1 1882 4.1469998657703400e-02
+
+ 1.3062499463558197e-01 -2.3274369239807129e+00
+ <_>
+
+ 0 -1 1883 -2.8880000114440918e-02
+
+ -6.6074597835540771e-01 -9.0960003435611725e-03
+ <_>
+
+ 0 -1 1884 4.6381998807191849e-02
+
+ 1.6630199551582336e-01 -6.6949498653411865e-01
+ <_>
+
+ 0 -1 1885 2.5424998998641968e-01
+
+ -5.4641999304294586e-02 -1.2676080465316772e+00
+ <_>
+
+ 0 -1 1886 2.4000001139938831e-03
+
+ 2.0276799798011780e-01 1.4667999930679798e-02
+ <_>
+
+ 0 -1 1887 -8.2805998623371124e-02
+
+ -7.8713601827621460e-01 -2.4468999356031418e-02
+ <_>
+
+ 0 -1 1888 -1.1438000015914440e-02
+
+ 2.8623399138450623e-01 -3.0894000083208084e-02
+ <_>
+
+ 0 -1 1889 -1.2913399934768677e-01
+
+ 1.7292929887771606e+00 -1.4293900132179260e-01
+ <_>
+
+ 0 -1 1890 3.8552999496459961e-02
+
+ 1.9232999533414841e-02 3.7732601165771484e-01
+ <_>
+
+ 0 -1 1891 1.0191400349140167e-01
+
+ -7.4533998966217041e-02 -3.3868899345397949e+00
+ <_>
+
+ 0 -1 1892 -1.9068000838160515e-02
+
+ 3.1814101338386536e-01 1.9261000677943230e-02
+ <_>
+
+ 0 -1 1893 -6.0775000602006912e-02
+
+ 7.6936298608779907e-01 -1.7644000053405762e-01
+ <_>
+
+ 0 -1 1894 2.4679999798536301e-02
+
+ 1.8396499752998352e-01 -3.0868801474571228e-01
+ <_>
+
+ 0 -1 1895 2.6759000495076180e-02
+
+ -2.3454900085926056e-01 3.3056598901748657e-01
+ <_>
+
+ 0 -1 1896 1.4969999901950359e-02
+
+ 1.7213599383831024e-01 -1.8248899281024933e-01
+ <_>
+
+ 0 -1 1897 2.6142999529838562e-02
+
+ -4.6463999897241592e-02 -1.1318379640579224e+00
+ <_>
+
+ 0 -1 1898 -3.7512000650167465e-02
+
+ 8.0404001474380493e-01 6.9660000503063202e-02
+ <_>
+
+ 0 -1 1899 -5.3229997865855694e-03
+
+ -8.1884402036666870e-01 -1.8224999308586121e-02
+ <_>
+
+ 0 -1 1900 1.7813000828027725e-02
+
+ 1.4957800507545471e-01 -1.8667200207710266e-01
+ <_>
+
+ 0 -1 1901 -3.4010000526905060e-02
+
+ -7.2852301597595215e-01 -1.6615999862551689e-02
+ <_>
+
+ 0 -1 1902 -1.5953000634908676e-02
+
+ 5.6944000720977783e-01 1.3832000084221363e-02
+ <_>
+
+ 0 -1 1903 1.9743999466300011e-02
+
+ 4.0525000542402267e-02 -4.1773399710655212e-01
+ <_>
+
+ 0 -1 1904 -1.0374800115823746e-01
+
+ -1.9825149774551392e+00 1.1960200220346451e-01
+ <_>
+
+ 0 -1 1905 -1.9285000860691071e-02
+
+ 5.0230598449707031e-01 -1.9745899736881256e-01
+ <_>
+
+ 0 -1 1906 -1.2780000455677509e-02
+
+ 4.0195000171661377e-01 -2.6957999914884567e-02
+ <_>
+
+ 0 -1 1907 -1.6352999955415726e-02
+
+ -7.6608800888061523e-01 -2.4209000170230865e-02
+ <_>
+
+ 0 -1 1908 -1.2763699889183044e-01
+
+ 8.6578500270843506e-01 6.4205996692180634e-02
+ <_>
+
+ 0 -1 1909 1.9068999215960503e-02
+
+ -5.5929797887802124e-01 -1.6880000475794077e-03
+ <_>
+
+ 0 -1 1910 3.2480999827384949e-02
+
+ 4.0722001343965530e-02 4.8925098776817322e-01
+ <_>
+
+ 0 -1 1911 9.4849998131394386e-03
+
+ -1.9231900572776794e-01 5.1139700412750244e-01
+ <_>
+
+ 0 -1 1912 5.0470000132918358e-03
+
+ 1.8706800043582916e-01 -1.6113600134849548e-01
+ <_>
+
+ 0 -1 1913 4.1267998516559601e-02
+
+ -4.8817999660968781e-02 -1.1326299905776978e+00
+ <_>
+
+ 0 -1 1914 -7.6358996331691742e-02
+
+ 1.4169390201568604e+00 8.7319999933242798e-02
+ <_>
+
+ 0 -1 1915 -7.2834998369216919e-02
+
+ 1.3189860582351685e+00 -1.4819100499153137e-01
+ <_>
+
+ 0 -1 1916 5.9576999396085739e-02
+
+ 4.8376999795436859e-02 8.5611802339553833e-01
+ <_>
+
+ 0 -1 1917 2.0263999700546265e-02
+
+ -2.1044099330902100e-01 3.3858999609947205e-01
+ <_>
+
+ 0 -1 1918 -8.0301001667976379e-02
+
+ -1.2464400529861450e+00 1.1857099831104279e-01
+ <_>
+
+ 0 -1 1919 -1.7835000529885292e-02
+
+ 2.5782299041748047e-01 -2.4564799666404724e-01
+ <_>
+
+ 0 -1 1920 1.1431000195443630e-02
+
+ 2.2949799895286560e-01 -2.9497599601745605e-01
+ <_>
+
+ 0 -1 1921 -2.5541000068187714e-02
+
+ -8.6252999305725098e-01 -7.0400000549852848e-04
+ <_>
+
+ 0 -1 1922 -7.6899997657164931e-04
+
+ 3.1511399149894714e-01 -1.4349000155925751e-01
+ <_>
+
+ 0 -1 1923 -1.4453999698162079e-02
+
+ 2.5148499011993408e-01 -2.8232899308204651e-01
+ <_>
+
+ 0 -1 1924 8.6730001494288445e-03
+
+ 2.6601400971412659e-01 -2.8190800547599792e-01
+ <_>
+ 197
+ -3.2772979736328125e+00
+
+ <_>
+
+ 0 -1 1925 5.4708998650312424e-02
+
+ -5.4144299030303955e-01 6.1043000221252441e-01
+ <_>
+
+ 0 -1 1926 -1.0838799923658371e-01
+
+ 7.1739900112152100e-01 -4.1196098923683167e-01
+ <_>
+
+ 0 -1 1927 2.2996999323368073e-02
+
+ -5.8269798755645752e-01 2.9645600914955139e-01
+ <_>
+
+ 0 -1 1928 2.7540000155568123e-03
+
+ -7.4243897199630737e-01 1.4183300733566284e-01
+ <_>
+
+ 0 -1 1929 -2.1520000882446766e-03
+
+ 1.7879900336265564e-01 -6.8548601865768433e-01
+ <_>
+
+ 0 -1 1930 -2.2559000179171562e-02
+
+ -1.0775549411773682e+00 1.2388999760150909e-01
+ <_>
+
+ 0 -1 1931 8.3025000989437103e-02
+
+ 2.4500999599695206e-02 -1.0251879692077637e+00
+ <_>
+
+ 0 -1 1932 -6.6740000620484352e-03
+
+ -4.5283100008964539e-01 2.1230199933052063e-01
+ <_>
+
+ 0 -1 1933 7.6485000550746918e-02
+
+ -2.6972699165344238e-01 4.8580199480056763e-01
+ <_>
+
+ 0 -1 1934 5.4910001344978809e-03
+
+ -4.8871201276779175e-01 3.1616398692131042e-01
+ <_>
+
+ 0 -1 1935 -1.0414999909698963e-02
+
+ 4.1512900590896606e-01 -3.0044800043106079e-01
+ <_>
+
+ 0 -1 1936 2.7607999742031097e-02
+
+ 1.6203799843788147e-01 -9.9868500232696533e-01
+ <_>
+
+ 0 -1 1937 -2.3272000253200531e-02
+
+ -1.1024399995803833e+00 2.1124999970197678e-02
+ <_>
+
+ 0 -1 1938 -5.5619999766349792e-02
+
+ 6.5033102035522461e-01 -2.7938000857830048e-02
+ <_>
+
+ 0 -1 1939 -4.0631998330354691e-02
+
+ 4.2117300629615784e-01 -2.6763799786567688e-01
+ <_>
+
+ 0 -1 1940 -7.3560001328587532e-03
+
+ 3.5277798771858215e-01 -3.7854000926017761e-01
+ <_>
+
+ 0 -1 1941 1.7007000744342804e-02
+
+ -2.9189500212669373e-01 4.1053798794746399e-01
+ <_>
+
+ 0 -1 1942 -3.7034001201391220e-02
+
+ -1.3216309547424316e+00 1.2966500222682953e-01
+ <_>
+
+ 0 -1 1943 -1.9633000716567039e-02
+
+ -8.7702298164367676e-01 1.0799999581649899e-03
+ <_>
+
+ 0 -1 1944 -2.3546999320387840e-02
+
+ 2.6106101274490356e-01 -2.1481400728225708e-01
+ <_>
+
+ 0 -1 1945 -4.3352998793125153e-02
+
+ -9.9089699983596802e-01 -9.9560003727674484e-03
+ <_>
+
+ 0 -1 1946 -2.2183999419212341e-02
+
+ 6.3454401493072510e-01 -5.6547001004219055e-02
+ <_>
+
+ 0 -1 1947 1.6530999913811684e-02
+
+ 2.4664999917149544e-02 -7.3326802253723145e-01
+ <_>
+
+ 0 -1 1948 -3.2744001597166061e-02
+
+ -5.6297200918197632e-01 1.6640299558639526e-01
+ <_>
+
+ 0 -1 1949 7.1415998041629791e-02
+
+ -3.0000001424923539e-04 -9.3286401033401489e-01
+ <_>
+
+ 0 -1 1950 8.0999999772757292e-04
+
+ -9.5380000770092010e-02 2.5184699892997742e-01
+ <_>
+
+ 0 -1 1951 -8.4090000018477440e-03
+
+ -6.5496802330017090e-01 6.7300997674465179e-02
+ <_>
+
+ 0 -1 1952 -1.7254000529646873e-02
+
+ -4.6492999792098999e-01 1.6070899367332458e-01
+ <_>
+
+ 0 -1 1953 -1.8641000613570213e-02
+
+ -1.0594010353088379e+00 -1.9617000594735146e-02
+ <_>
+
+ 0 -1 1954 -9.1979997232556343e-03
+
+ 5.0716197490692139e-01 -1.5339200198650360e-01
+ <_>
+
+ 0 -1 1955 1.8538000062108040e-02
+
+ -3.0498200654983521e-01 7.3506200313568115e-01
+ <_>
+
+ 0 -1 1956 -5.0335001200437546e-02
+
+ -1.1140480041503906e+00 1.8000100553035736e-01
+ <_>
+
+ 0 -1 1957 -2.3529000580310822e-02
+
+ -8.6907899379730225e-01 -1.2459999881684780e-02
+ <_>
+
+ 0 -1 1958 -2.7100000530481339e-02
+
+ 6.5942901372909546e-01 -3.5323999822139740e-02
+ <_>
+
+ 0 -1 1959 6.5879998728632927e-03
+
+ -2.2953400015830994e-01 4.2425099015235901e-01
+ <_>
+
+ 0 -1 1960 2.3360000923275948e-02
+
+ 1.8356199562549591e-01 -9.8587298393249512e-01
+ <_>
+
+ 0 -1 1961 1.2946999631822109e-02
+
+ -3.3147400617599487e-01 2.1323199570178986e-01
+ <_>
+
+ 0 -1 1962 -6.6559999249875546e-03
+
+ -1.1951400339603424e-01 2.9752799868583679e-01
+ <_>
+
+ 0 -1 1963 -2.2570999339222908e-02
+
+ 3.8499400019645691e-01 -2.4434499442577362e-01
+ <_>
+
+ 0 -1 1964 -6.3813999295234680e-02
+
+ -8.9383500814437866e-01 1.4217500388622284e-01
+ <_>
+
+ 0 -1 1965 -4.9945000559091568e-02
+
+ 5.3864401578903198e-01 -2.0485299825668335e-01
+ <_>
+
+ 0 -1 1966 6.8319998681545258e-03
+
+ -5.6678999215364456e-02 3.9970999956130981e-01
+ <_>
+
+ 0 -1 1967 -5.5835999548435211e-02
+
+ -1.5239470005035400e+00 -5.1183000206947327e-02
+ <_>
+
+ 0 -1 1968 3.1957000494003296e-01
+
+ 7.4574001133441925e-02 1.2447799444198608e+00
+ <_>
+
+ 0 -1 1969 8.0955997109413147e-02
+
+ -1.9665500521659851e-01 5.9889698028564453e-01
+ <_>
+
+ 0 -1 1970 -1.4911999925971031e-02
+
+ -6.4020597934722900e-01 1.5807600319385529e-01
+ <_>
+
+ 0 -1 1971 4.6709001064300537e-02
+
+ 8.5239000618457794e-02 -4.5487201213836670e-01
+ <_>
+
+ 0 -1 1972 6.0539999976754189e-03
+
+ -4.3184000253677368e-01 2.2452600300312042e-01
+ <_>
+
+ 0 -1 1973 -3.4375999122858047e-02
+
+ 4.0202501416206360e-01 -2.3903599381446838e-01
+ <_>
+
+ 0 -1 1974 -3.4924000501632690e-02
+
+ 5.2870100736618042e-01 3.9709001779556274e-02
+ <_>
+
+ 0 -1 1975 3.0030000489205122e-03
+
+ -3.8754299283027649e-01 1.4192600548267365e-01
+ <_>
+
+ 0 -1 1976 -1.4132999815046787e-02
+
+ 8.7528401613235474e-01 8.5507996380329132e-02
+ <_>
+
+ 0 -1 1977 -6.7940000444650650e-03
+
+ -1.1649219989776611e+00 -3.3943001180887222e-02
+ <_>
+
+ 0 -1 1978 -5.2886001765727997e-02
+
+ 1.0930680036544800e+00 5.1187001168727875e-02
+ <_>
+
+ 0 -1 1979 -2.1079999860376120e-03
+
+ 1.3696199655532837e-01 -3.3849999308586121e-01
+ <_>
+
+ 0 -1 1980 1.8353000283241272e-02
+
+ 1.3661600649356842e-01 -4.0777799487113953e-01
+ <_>
+
+ 0 -1 1981 1.2671999633312225e-02
+
+ -1.4936000108718872e-02 -8.1707501411437988e-01
+ <_>
+
+ 0 -1 1982 1.2924999929964542e-02
+
+ 1.7625099420547485e-01 -3.2491698861122131e-01
+ <_>
+
+ 0 -1 1983 -1.7921000719070435e-02
+
+ -5.2745401859283447e-01 4.4443000108003616e-02
+ <_>
+
+ 0 -1 1984 1.9160000374540687e-03
+
+ -1.0978599637746811e-01 2.2067500650882721e-01
+ <_>
+
+ 0 -1 1985 -1.4697999693453312e-02
+
+ 3.9067798852920532e-01 -2.2224999964237213e-01
+ <_>
+
+ 0 -1 1986 -1.4972999691963196e-02
+
+ -2.5450900197029114e-01 1.7790000140666962e-01
+ <_>
+
+ 0 -1 1987 1.4636999927461147e-02
+
+ -2.5125000625848770e-02 -8.7121301889419556e-01
+ <_>
+
+ 0 -1 1988 -1.0974000208079815e-02
+
+ 7.9082798957824707e-01 2.0121000707149506e-02
+ <_>
+
+ 0 -1 1989 -9.1599998995661736e-03
+
+ -4.7906899452209473e-01 5.2232000976800919e-02
+ <_>
+
+ 0 -1 1990 4.6179997734725475e-03
+
+ -1.7244599759578705e-01 3.4527799487113953e-01
+ <_>
+
+ 0 -1 1991 2.3476999253034592e-02
+
+ 3.7760001141577959e-03 -6.5333700180053711e-01
+ <_>
+
+ 0 -1 1992 3.1766999512910843e-02
+
+ 1.6364000737667084e-02 5.8723700046539307e-01
+ <_>
+
+ 0 -1 1993 -1.8419999629259109e-02
+
+ 1.9993899762630463e-01 -3.2056498527526855e-01
+ <_>
+
+ 0 -1 1994 1.9543999806046486e-02
+
+ 1.8450200557708740e-01 -2.3793600499629974e-01
+ <_>
+
+ 0 -1 1995 4.1159498691558838e-01
+
+ -6.0382001101970673e-02 -1.6072119474411011e+00
+ <_>
+
+ 0 -1 1996 -4.1595999151468277e-02
+
+ -3.2756200432777405e-01 1.5058000385761261e-01
+ <_>
+
+ 0 -1 1997 -1.0335999540984631e-02
+
+ -6.2394398450851440e-01 1.3112000189721584e-02
+ <_>
+
+ 0 -1 1998 1.2392999604344368e-02
+
+ -3.3114999532699585e-02 5.5579900741577148e-01
+ <_>
+
+ 0 -1 1999 -8.7270000949501991e-03
+
+ 1.9883200526237488e-01 -3.7635600566864014e-01
+ <_>
+
+ 0 -1 2000 1.6295000910758972e-02
+
+ 2.0373000204563141e-01 -4.2800799012184143e-01
+ <_>
+
+ 0 -1 2001 -1.0483999736607075e-02
+
+ -5.6847000122070312e-01 4.4199001044034958e-02
+ <_>
+
+ 0 -1 2002 -1.2431999668478966e-02
+
+ 7.4641901254653931e-01 4.3678998947143555e-02
+ <_>
+
+ 0 -1 2003 -5.0374999642372131e-02
+
+ 8.5090100765228271e-01 -1.7773799598217010e-01
+ <_>
+
+ 0 -1 2004 4.9548000097274780e-02
+
+ 1.6784900426864624e-01 -2.9877498745918274e-01
+ <_>
+
+ 0 -1 2005 -4.1085001081228256e-02
+
+ -1.3302919864654541e+00 -4.9182001501321793e-02
+ <_>
+
+ 0 -1 2006 1.0069999843835831e-03
+
+ -6.0538999736309052e-02 1.8483200669288635e-01
+ <_>
+
+ 0 -1 2007 -5.0142999738454819e-02
+
+ 7.6447701454162598e-01 -1.8356999754905701e-01
+ <_>
+
+ 0 -1 2008 -8.7879998609423637e-03
+
+ 2.2655999660491943e-01 -6.3156999647617340e-02
+ <_>
+
+ 0 -1 2009 -5.0170999020338058e-02
+
+ -1.5899070501327515e+00 -6.1255000531673431e-02
+ <_>
+
+ 0 -1 2010 1.0216099768877029e-01
+
+ 1.2071800231933594e-01 -1.4120110273361206e+00
+ <_>
+
+ 0 -1 2011 -1.4372999779880047e-02
+
+ -1.3116970062255859e+00 -5.1936000585556030e-02
+ <_>
+
+ 0 -1 2012 1.0281999595463276e-02
+
+ -2.1639999467879534e-03 4.4247201085090637e-01
+ <_>
+
+ 0 -1 2013 -1.1814000084996223e-02
+
+ 6.5378099679946899e-01 -1.8723699450492859e-01
+ <_>
+
+ 0 -1 2014 7.2114996612071991e-02
+
+ 7.1846999228000641e-02 8.1496298313140869e-01
+ <_>
+
+ 0 -1 2015 -1.9001999869942665e-02
+
+ -6.7427200078964233e-01 -4.3200000072829425e-04
+ <_>
+
+ 0 -1 2016 -4.6990001574158669e-03
+
+ 3.3311501145362854e-01 5.5794000625610352e-02
+ <_>
+
+ 0 -1 2017 -5.8157000690698624e-02
+
+ 4.5572298765182495e-01 -2.0305100083351135e-01
+ <_>
+
+ 0 -1 2018 1.1360000353306532e-03
+
+ -4.4686999171972275e-02 2.2681899368762970e-01
+ <_>
+
+ 0 -1 2019 -4.9414999783039093e-02
+
+ 2.6694598793983459e-01 -2.6116999983787537e-01
+ <_>
+
+ 0 -1 2020 -1.1913800239562988e-01
+
+ -8.3017998933792114e-01 1.3248500227928162e-01
+ <_>
+
+ 0 -1 2021 -1.8303999677300453e-02
+
+ -6.7499202489852905e-01 1.7092000693082809e-02
+ <_>
+
+ 0 -1 2022 -7.9199997708201408e-03
+
+ -7.2287000715732574e-02 1.4425800740718842e-01
+ <_>
+
+ 0 -1 2023 5.1925998181104660e-02
+
+ 3.0921999365091324e-02 -5.5860602855682373e-01
+ <_>
+
+ 0 -1 2024 6.6724002361297607e-02
+
+ 1.3666400313377380e-01 -2.9411000013351440e-01
+ <_>
+
+ 0 -1 2025 -1.3778000138700008e-02
+
+ -5.9443902969360352e-01 1.5300000086426735e-02
+ <_>
+
+ 0 -1 2026 -1.7760999500751495e-02
+
+ 4.0496501326560974e-01 -3.3559999428689480e-03
+ <_>
+
+ 0 -1 2027 -4.2234998196363449e-02
+
+ -1.0897940397262573e+00 -4.0224999189376831e-02
+ <_>
+
+ 0 -1 2028 -1.3524999842047691e-02
+
+ 2.8921899199485779e-01 -2.5194799900054932e-01
+ <_>
+
+ 0 -1 2029 -1.1106000281870365e-02
+
+ 6.5312802791595459e-01 -1.8053700029850006e-01
+ <_>
+
+ 0 -1 2030 -1.2284599989652634e-01
+
+ -1.9570649862289429e+00 1.4815400540828705e-01
+ <_>
+
+ 0 -1 2031 4.7715999186038971e-02
+
+ -2.2875599563121796e-01 3.4233701229095459e-01
+ <_>
+
+ 0 -1 2032 3.1817000359296799e-02
+
+ 1.5976299345493317e-01 -1.0091969966888428e+00
+ <_>
+
+ 0 -1 2033 4.2570000514388084e-03
+
+ -3.8881298899650574e-01 8.4210000932216644e-02
+ <_>
+
+ 0 -1 2034 -6.1372999101877213e-02
+
+ 1.7152810096740723e+00 5.9324998408555984e-02
+ <_>
+
+ 0 -1 2035 -2.7030000928789377e-03
+
+ -3.8161700963973999e-01 8.5127003490924835e-02
+ <_>
+
+ 0 -1 2036 -6.8544000387191772e-02
+
+ -3.0925889015197754e+00 1.1788000166416168e-01
+ <_>
+
+ 0 -1 2037 1.0372500121593475e-01
+
+ -1.3769300282001495e-01 1.9009410142898560e+00
+ <_>
+
+ 0 -1 2038 1.5799000859260559e-02
+
+ -6.2660001218318939e-02 2.5917699933052063e-01
+ <_>
+
+ 0 -1 2039 -9.8040001466870308e-03
+
+ -5.6291598081588745e-01 4.3923001736402512e-02
+ <_>
+
+ 0 -1 2040 -9.0229995548725128e-03
+
+ 2.5287100672721863e-01 -4.1225999593734741e-02
+ <_>
+
+ 0 -1 2041 -6.3754998147487640e-02
+
+ -2.6178569793701172e+00 -7.4005998671054840e-02
+ <_>
+
+ 0 -1 2042 3.8954999297857285e-02
+
+ 5.9032998979091644e-02 8.5945600271224976e-01
+ <_>
+
+ 0 -1 2043 -3.9802998304367065e-02
+
+ 9.3600499629974365e-01 -1.5639400482177734e-01
+ <_>
+
+ 0 -1 2044 5.0301998853683472e-02
+
+ 1.3725900650024414e-01 -2.5549728870391846e+00
+ <_>
+
+ 0 -1 2045 4.6250000596046448e-02
+
+ -1.3964000158011913e-02 -7.1026200056076050e-01
+ <_>
+
+ 0 -1 2046 6.2196001410484314e-02
+
+ 5.9526000171899796e-02 1.6509100198745728e+00
+ <_>
+
+ 0 -1 2047 -6.4776003360748291e-02
+
+ 7.1368998289108276e-01 -1.7270000278949738e-01
+ <_>
+
+ 0 -1 2048 2.7522999793291092e-02
+
+ 1.4631600677967072e-01 -8.1428997218608856e-02
+ <_>
+
+ 0 -1 2049 3.9900001138448715e-04
+
+ -3.7144500017166138e-01 1.0152699798345566e-01
+ <_>
+
+ 0 -1 2050 -4.3299999088048935e-03
+
+ -2.3756299912929535e-01 2.6798400282859802e-01
+ <_>
+
+ 0 -1 2051 4.7297000885009766e-02
+
+ -2.7682000771164894e-02 -8.4910297393798828e-01
+ <_>
+
+ 0 -1 2052 1.2508999556303024e-02
+
+ 1.8730199337005615e-01 -5.6001102924346924e-01
+ <_>
+
+ 0 -1 2053 4.5899000018835068e-02
+
+ -1.5601199865341187e-01 9.7073000669479370e-01
+ <_>
+
+ 0 -1 2054 1.9853399693965912e-01
+
+ 1.4895500242710114e-01 -1.1015529632568359e+00
+ <_>
+
+ 0 -1 2055 1.6674999147653580e-02
+
+ -1.6615299880504608e-01 8.2210999727249146e-01
+ <_>
+
+ 0 -1 2056 1.9829999655485153e-03
+
+ -7.1249999105930328e-02 2.8810900449752808e-01
+ <_>
+
+ 0 -1 2057 2.2447999566793442e-02
+
+ -2.0981000736355782e-02 -7.8416502475738525e-01
+ <_>
+
+ 0 -1 2058 -1.3913000002503395e-02
+
+ -1.8165799975395203e-01 2.0491799712181091e-01
+ <_>
+
+ 0 -1 2059 -7.7659999951720238e-03
+
+ -4.5595899224281311e-01 6.3576996326446533e-02
+ <_>
+
+ 0 -1 2060 -1.3209000229835510e-02
+
+ 2.6632300019264221e-01 -1.7795999348163605e-01
+ <_>
+
+ 0 -1 2061 4.9052998423576355e-02
+
+ -1.5476800501346588e-01 1.1069979667663574e+00
+ <_>
+
+ 0 -1 2062 2.0263999700546265e-02
+
+ 6.8915002048015594e-02 6.9867497682571411e-01
+ <_>
+
+ 0 -1 2063 -1.6828000545501709e-02
+
+ 2.7607199549674988e-01 -2.5139200687408447e-01
+ <_>
+
+ 0 -1 2064 -1.6939499974250793e-01
+
+ -3.0767529010772705e+00 1.1617500334978104e-01
+ <_>
+
+ 0 -1 2065 -1.1336100101470947e-01
+
+ -1.4639229774475098e+00 -5.1447000354528427e-02
+ <_>
+
+ 0 -1 2066 -7.7685996890068054e-02
+
+ 8.8430202007293701e-01 4.3306998908519745e-02
+ <_>
+
+ 0 -1 2067 -1.5568000264465809e-02
+
+ 1.3672499358654022e-01 -3.4505501389503479e-01
+ <_>
+
+ 0 -1 2068 -6.6018998622894287e-02
+
+ -1.0300110578536987e+00 1.1601399630308151e-01
+ <_>
+
+ 0 -1 2069 8.3699999377131462e-03
+
+ 7.6429001986980438e-02 -4.4002500176429749e-01
+ <_>
+
+ 0 -1 2070 3.5402998328208923e-02
+
+ 1.1979500204324722e-01 -7.2668302059173584e-01
+ <_>
+
+ 0 -1 2071 -3.9051000028848648e-02
+
+ 6.7375302314758301e-01 -1.8196000158786774e-01
+ <_>
+
+ 0 -1 2072 -9.7899995744228363e-03
+
+ 2.1264599263668060e-01 3.6756001412868500e-02
+ <_>
+
+ 0 -1 2073 -2.3047000169754028e-02
+
+ 4.4742199778556824e-01 -2.0986700057983398e-01
+ <_>
+
+ 0 -1 2074 3.1169999856501818e-03
+
+ 3.7544000893831253e-02 2.7808201313018799e-01
+ <_>
+
+ 0 -1 2075 1.3136000372469425e-02
+
+ -1.9842399656772614e-01 5.4335701465606689e-01
+ <_>
+
+ 0 -1 2076 1.4782000333070755e-02
+
+ 1.3530600070953369e-01 -1.1153600364923477e-01
+ <_>
+
+ 0 -1 2077 -6.0139000415802002e-02
+
+ 8.4039300680160522e-01 -1.6711600124835968e-01
+ <_>
+
+ 0 -1 2078 5.1998998969793320e-02
+
+ 1.7372000217437744e-01 -7.8547602891921997e-01
+ <_>
+
+ 0 -1 2079 2.4792000651359558e-02
+
+ -1.7739200592041016e-01 6.6752600669860840e-01
+ <_>
+
+ 0 -1 2080 -1.2014999985694885e-02
+
+ -1.4263699948787689e-01 1.6070500016212463e-01
+ <_>
+
+ 0 -1 2081 -9.8655998706817627e-02
+
+ 1.0429769754409790e+00 -1.5770199894905090e-01
+ <_>
+
+ 0 -1 2082 1.1758299916982651e-01
+
+ 1.0955700278282166e-01 -4.4920377731323242e+00
+ <_>
+
+ 0 -1 2083 -1.8922999501228333e-02
+
+ -7.8543400764465332e-01 1.2984000146389008e-02
+ <_>
+
+ 0 -1 2084 -2.8390999883413315e-02
+
+ -6.0569900274276733e-01 1.2903499603271484e-01
+ <_>
+
+ 0 -1 2085 1.3182999566197395e-02
+
+ -1.4415999874472618e-02 -7.3210501670837402e-01
+ <_>
+
+ 0 -1 2086 -1.1653000116348267e-01
+
+ -2.0442469120025635e+00 1.4053100347518921e-01
+ <_>
+
+ 0 -1 2087 -3.8880000356584787e-03
+
+ -4.1861599683761597e-01 7.8704997897148132e-02
+ <_>
+
+ 0 -1 2088 3.1229000538587570e-02
+
+ 2.4632999673485756e-02 4.1870400309562683e-01
+ <_>
+
+ 0 -1 2089 2.5198999792337418e-02
+
+ -1.7557799816131592e-01 6.4710599184036255e-01
+ <_>
+
+ 0 -1 2090 -2.8124000877141953e-02
+
+ -2.2005599737167358e-01 1.4121000468730927e-01
+ <_>
+
+ 0 -1 2091 3.6499001085758209e-02
+
+ -6.8426996469497681e-02 -2.3410849571228027e+00
+ <_>
+
+ 0 -1 2092 -7.2292998433113098e-02
+
+ 1.2898750305175781e+00 8.4875002503395081e-02
+ <_>
+
+ 0 -1 2093 -4.1671000421047211e-02
+
+ -1.1630970239639282e+00 -5.3752999752759933e-02
+ <_>
+
+ 0 -1 2094 4.7703001648187637e-02
+
+ 7.0101000368595123e-02 7.3676502704620361e-01
+ <_>
+
+ 0 -1 2095 6.5793000161647797e-02
+
+ -1.7755299806594849e-01 6.9780498743057251e-01
+ <_>
+
+ 0 -1 2096 1.3904999941587448e-02
+
+ 2.1936799585819244e-01 -2.0390799641609192e-01
+ <_>
+
+ 0 -1 2097 -2.7730999514460564e-02
+
+ 6.1867898702621460e-01 -1.7804099619388580e-01
+ <_>
+
+ 0 -1 2098 -1.5879999846220016e-02
+
+ -4.6484100818634033e-01 1.8828600645065308e-01
+ <_>
+
+ 0 -1 2099 7.4128001928329468e-02
+
+ -1.2858100235462189e-01 3.2792479991912842e+00
+ <_>
+
+ 0 -1 2100 -8.9000002481043339e-04
+
+ -3.0117601156234741e-01 2.3818799853324890e-01
+ <_>
+
+ 0 -1 2101 1.7965000122785568e-02
+
+ -2.2284999489784241e-01 2.9954001307487488e-01
+ <_>
+
+ 0 -1 2102 -2.5380000006407499e-03
+
+ 2.5064399838447571e-01 -1.3665600121021271e-01
+ <_>
+
+ 0 -1 2103 -9.0680001303553581e-03
+
+ 2.9017499089241028e-01 -2.8929701447486877e-01
+ <_>
+
+ 0 -1 2104 4.9169998615980148e-02
+
+ 1.9156399369239807e-01 -6.8328702449798584e-01
+ <_>
+
+ 0 -1 2105 -3.0680999159812927e-02
+
+ -7.5677001476287842e-01 -1.3279999606311321e-02
+ <_>
+
+ 0 -1 2106 1.0017400234937668e-01
+
+ 8.4453999996185303e-02 1.0888710021972656e+00
+ <_>
+
+ 0 -1 2107 3.1950001139193773e-03
+
+ -2.6919400691986084e-01 1.9537900388240814e-01
+ <_>
+
+ 0 -1 2108 3.5503000020980835e-02
+
+ 1.3632300496101379e-01 -5.6917202472686768e-01
+ <_>
+
+ 0 -1 2109 4.5900000259280205e-04
+
+ -4.0443998575210571e-01 1.4074799418449402e-01
+ <_>
+
+ 0 -1 2110 2.5258999317884445e-02
+
+ 1.6243200004100800e-01 -5.5741798877716064e-01
+ <_>
+
+ 0 -1 2111 -5.1549999043345451e-03
+
+ 3.1132599711418152e-01 -2.2756099700927734e-01
+ <_>
+
+ 0 -1 2112 1.5869999770075083e-03
+
+ -2.6867699623107910e-01 1.9565400481224060e-01
+ <_>
+
+ 0 -1 2113 -1.6204999759793282e-02
+
+ 1.5486499667167664e-01 -3.4057798981666565e-01
+ <_>
+
+ 0 -1 2114 -2.9624000191688538e-02
+
+ 1.1466799974441528e+00 9.0557999908924103e-02
+ <_>
+
+ 0 -1 2115 -1.5930000226944685e-03
+
+ -7.1257501840591431e-01 -7.0400000549852848e-04
+ <_>
+
+ 0 -1 2116 -5.4019000381231308e-02
+
+ 4.1537499427795410e-01 2.7246000245213509e-02
+ <_>
+
+ 0 -1 2117 -6.6211000084877014e-02
+
+ -1.3340090513229370e+00 -4.7352999448776245e-02
+ <_>
+
+ 0 -1 2118 2.7940999716520309e-02
+
+ 1.4446300268173218e-01 -5.1518398523330688e-01
+ <_>
+
+ 0 -1 2119 2.8957000002264977e-02
+
+ -4.9966000020503998e-02 -1.1929039955139160e+00
+ <_>
+
+ 0 -1 2120 -2.0424999296665192e-02
+
+ 6.3881301879882812e-01 3.8141001015901566e-02
+ <_>
+
+ 0 -1 2121 1.2416999787092209e-02
+
+ -2.1547000110149384e-01 4.9477699398994446e-01
+ <_>
+ 181
+ -3.3196411132812500e+00
+
+ <_>
+
+ 0 -1 2122 4.3274000287055969e-02
+
+ -8.0494397878646851e-01 3.9897298812866211e-01
+ <_>
+
+ 0 -1 2123 1.8615500628948212e-01
+
+ -3.1655299663543701e-01 6.8877297639846802e-01
+ <_>
+
+ 0 -1 2124 3.1860999763011932e-02
+
+ -6.4266198873519897e-01 2.5550898909568787e-01
+ <_>
+
+ 0 -1 2125 1.4022000133991241e-02
+
+ -4.5926600694656372e-01 3.1171199679374695e-01
+ <_>
+
+ 0 -1 2126 -6.3029997982084751e-03
+
+ 4.6026900410652161e-01 -2.7438500523567200e-01
+ <_>
+
+ 0 -1 2127 -5.4310001432895660e-03
+
+ 3.6608600616455078e-01 -2.7205801010131836e-01
+ <_>
+
+ 0 -1 2128 1.6822999343276024e-02
+
+ 2.3476999253034592e-02 -8.8443797826766968e-01
+ <_>
+
+ 0 -1 2129 2.6039000600576401e-02
+
+ 1.7488799989223480e-01 -5.4564702510833740e-01
+ <_>
+
+ 0 -1 2130 -2.6720000430941582e-02
+
+ -9.6396499872207642e-01 2.3524999618530273e-02
+ <_>
+
+ 0 -1 2131 -1.7041999846696854e-02
+
+ -7.0848798751831055e-01 2.1468099951744080e-01
+ <_>
+
+ 0 -1 2132 5.9569999575614929e-03
+
+ 7.3601000010967255e-02 -6.8225598335266113e-01
+ <_>
+
+ 0 -1 2133 -2.8679999522864819e-03
+
+ -7.4935001134872437e-01 2.3803399503231049e-01
+ <_>
+
+ 0 -1 2134 -4.3774999678134918e-02
+
+ 6.8323302268981934e-01 -2.1380299329757690e-01
+ <_>
+
+ 0 -1 2135 5.1633000373840332e-02
+
+ -1.2566499412059784e-01 6.7523801326751709e-01
+ <_>
+
+ 0 -1 2136 8.1780003383755684e-03
+
+ 7.0689998567104340e-02 -8.0665898323059082e-01
+ <_>
+
+ 0 -1 2137 -5.2841998636722565e-02
+
+ 9.5433902740478516e-01 1.6548000276088715e-02
+ <_>
+
+ 0 -1 2138 5.2583999931812286e-02
+
+ -2.8414401412010193e-01 4.7129800915718079e-01
+ <_>
+
+ 0 -1 2139 -1.2659000232815742e-02
+
+ 3.8445401191711426e-01 -6.2288001179695129e-02
+ <_>
+
+ 0 -1 2140 1.1694000102579594e-02
+
+ 5.6000000768108293e-05 -1.0173139572143555e+00
+ <_>
+
+ 0 -1 2141 -2.3918999359011650e-02
+
+ 8.4921300411224365e-01 5.7399999350309372e-03
+ <_>
+
+ 0 -1 2142 -6.1673998832702637e-02
+
+ -9.2571401596069336e-01 -1.7679999582469463e-03
+ <_>
+
+ 0 -1 2143 -1.8279999494552612e-03
+
+ -5.4372298717498779e-01 2.4932399392127991e-01
+ <_>
+
+ 0 -1 2144 3.5257998853921890e-02
+
+ -7.3719997890293598e-03 -9.3963998556137085e-01
+ <_>
+
+ 0 -1 2145 -1.8438000231981277e-02
+
+ 7.2136700153350830e-01 1.0491999797523022e-02
+ <_>
+
+ 0 -1 2146 -3.8389001041650772e-02
+
+ 1.9272600114345551e-01 -3.5832101106643677e-01
+ <_>
+
+ 0 -1 2147 9.9720999598503113e-02
+
+ 1.1354199796915054e-01 -1.6304190158843994e+00
+ <_>
+
+ 0 -1 2148 8.4462001919746399e-02
+
+ -5.3420998156070709e-02 -1.6981120109558105e+00
+ <_>
+
+ 0 -1 2149 4.0270000696182251e-02
+
+ -1.0783199965953827e-01 5.1926600933074951e-01
+ <_>
+
+ 0 -1 2150 5.8935999870300293e-02
+
+ -1.8053700029850006e-01 9.5119798183441162e-01
+ <_>
+
+ 0 -1 2151 1.4957000315189362e-01
+
+ 1.6785299777984619e-01 -1.1591869592666626e+00
+ <_>
+
+ 0 -1 2152 6.9399998756125569e-04
+
+ 2.0491400361061096e-01 -3.3118200302124023e-01
+ <_>
+
+ 0 -1 2153 -3.3369001001119614e-02
+
+ 9.3468099832534790e-01 -2.9639999847859144e-03
+ <_>
+
+ 0 -1 2154 9.3759996816515923e-03
+
+ 3.7000000011175871e-03 -7.7549797296524048e-01
+ <_>
+
+ 0 -1 2155 4.3193999677896500e-02
+
+ -2.2040000185370445e-03 7.4589699506759644e-01
+ <_>
+
+ 0 -1 2156 -6.7555002868175507e-02
+
+ 7.2292101383209229e-01 -1.8404200673103333e-01
+ <_>
+
+ 0 -1 2157 -3.1168600916862488e-01
+
+ 1.0014270544052124e+00 3.4003000706434250e-02
+ <_>
+
+ 0 -1 2158 2.9743999242782593e-02
+
+ -4.6356000006198883e-02 -1.2781809568405151e+00
+ <_>
+
+ 0 -1 2159 1.0737000033259392e-02
+
+ 1.4812000095844269e-02 6.6649997234344482e-01
+ <_>
+
+ 0 -1 2160 -2.8841000050306320e-02
+
+ -9.4222599267959595e-01 -2.0796999335289001e-02
+ <_>
+
+ 0 -1 2161 -5.7649998925626278e-03
+
+ -4.3541899323463440e-01 2.3386000096797943e-01
+ <_>
+
+ 0 -1 2162 2.8410999104380608e-02
+
+ -1.7615799605846405e-01 8.5765302181243896e-01
+ <_>
+
+ 0 -1 2163 -2.9007999226450920e-02
+
+ 5.7978099584579468e-01 2.8565999120473862e-02
+ <_>
+
+ 0 -1 2164 2.4965999647974968e-02
+
+ -2.2729000076651573e-02 -9.6773099899291992e-01
+ <_>
+
+ 0 -1 2165 1.2036000378429890e-02
+
+ -1.4214700460433960e-01 5.1687997579574585e-01
+ <_>
+
+ 0 -1 2166 -4.2514000087976456e-02
+
+ 9.7273802757263184e-01 -1.8119800090789795e-01
+ <_>
+
+ 0 -1 2167 1.0276000015437603e-02
+
+ -8.3099998533725739e-02 3.1762799620628357e-01
+ <_>
+
+ 0 -1 2168 -6.9191999733448029e-02
+
+ -2.0668580532073975e+00 -6.0173999518156052e-02
+ <_>
+
+ 0 -1 2169 -4.6769999898970127e-03
+
+ 4.4131800532341003e-01 2.3209000006318092e-02
+ <_>
+
+ 0 -1 2170 -1.3923999853432178e-02
+
+ 2.8606700897216797e-01 -2.9152700304985046e-01
+ <_>
+
+ 0 -1 2171 -1.5333999879658222e-02
+
+ -5.7414501905441284e-01 2.3063300549983978e-01
+ <_>
+
+ 0 -1 2172 -1.0239000432193279e-02
+
+ 3.4479200839996338e-01 -2.6080399751663208e-01
+ <_>
+
+ 0 -1 2173 -5.0988998264074326e-02
+
+ 5.6154102087020874e-01 6.1218999326229095e-02
+ <_>
+
+ 0 -1 2174 3.0689999461174011e-02
+
+ -1.4772799611091614e-01 1.6378489732742310e+00
+ <_>
+
+ 0 -1 2175 -1.1223999783396721e-02
+
+ 2.4006199836730957e-01 -4.4864898920059204e-01
+ <_>
+
+ 0 -1 2176 -6.2899999320507050e-03
+
+ 4.3119499087333679e-01 -2.3808999359607697e-01
+ <_>
+
+ 0 -1 2177 7.8590996563434601e-02
+
+ 1.9865000620484352e-02 8.0853801965713501e-01
+ <_>
+
+ 0 -1 2178 -1.0178999975323677e-02
+
+ 1.8193200230598450e-01 -3.2877799868583679e-01
+ <_>
+
+ 0 -1 2179 3.1227000057697296e-02
+
+ 1.4973899722099304e-01 -1.4180339574813843e+00
+ <_>
+
+ 0 -1 2180 4.0196999907493591e-02
+
+ -1.9760499894618988e-01 5.8508199453353882e-01
+ <_>
+
+ 0 -1 2181 1.6138000413775444e-02
+
+ 5.0000002374872565e-04 3.9050000905990601e-01
+ <_>
+
+ 0 -1 2182 -4.5519001781940460e-02
+
+ 1.2646820545196533e+00 -1.5632599592208862e-01
+ <_>
+
+ 0 -1 2183 -1.8130000680685043e-02
+
+ 6.5148502588272095e-01 1.0235999710857868e-02
+ <_>
+
+ 0 -1 2184 -1.4001999981701374e-02
+
+ -1.0344820022583008e+00 -3.2182998955249786e-02
+ <_>
+
+ 0 -1 2185 -3.8816001266241074e-02
+
+ -4.7874298691749573e-01 1.6290700435638428e-01
+ <_>
+
+ 0 -1 2186 3.1656000763177872e-02
+
+ -2.0983399450778961e-01 5.4575902223587036e-01
+ <_>
+
+ 0 -1 2187 -1.0839999653398991e-02
+
+ 5.1898801326751709e-01 -1.5080000273883343e-02
+ <_>
+
+ 0 -1 2188 1.2032999657094479e-02
+
+ -2.1107600629329681e-01 7.5937002897262573e-01
+ <_>
+
+ 0 -1 2189 7.0772998034954071e-02
+
+ 1.8048800528049469e-01 -7.4048501253128052e-01
+ <_>
+
+ 0 -1 2190 5.3139799833297729e-01
+
+ -1.4491699635982513e-01 1.5360039472579956e+00
+ <_>
+
+ 0 -1 2191 -1.4774000272154808e-02
+
+ -2.8153699636459351e-01 2.0407299697399139e-01
+ <_>
+
+ 0 -1 2192 -2.2410000674426556e-03
+
+ -4.4876301288604736e-01 5.3989000618457794e-02
+ <_>
+
+ 0 -1 2193 4.9968000501394272e-02
+
+ 4.1514001786708832e-02 2.9417100548744202e-01
+ <_>
+
+ 0 -1 2194 -4.7701999545097351e-02
+
+ 3.9674299955368042e-01 -2.8301799297332764e-01
+ <_>
+
+ 0 -1 2195 -9.1311000287532806e-02
+
+ 2.1994259357452393e+00 8.7964996695518494e-02
+ <_>
+
+ 0 -1 2196 3.8070000708103180e-02
+
+ -2.8025600314140320e-01 2.5156199932098389e-01
+ <_>
+
+ 0 -1 2197 -1.5538999810814857e-02
+
+ 3.4157499670982361e-01 1.7924999818205833e-02
+ <_>
+
+ 0 -1 2198 -1.5445999801158905e-02
+
+ 2.8680199384689331e-01 -2.5135898590087891e-01
+ <_>
+
+ 0 -1 2199 -5.7388000190258026e-02
+
+ 6.3830000162124634e-01 8.8597998023033142e-02
+ <_>
+
+ 0 -1 2200 -5.9440000914037228e-03
+
+ 7.9016998410224915e-02 -4.0774899721145630e-01
+ <_>
+
+ 0 -1 2201 -6.9968998432159424e-02
+
+ -4.4644200801849365e-01 1.7219600081443787e-01
+ <_>
+
+ 0 -1 2202 -2.5064999237656593e-02
+
+ -9.8270201683044434e-01 -3.5388000309467316e-02
+ <_>
+
+ 0 -1 2203 1.7216000705957413e-02
+
+ 2.2705900669097900e-01 -8.0550098419189453e-01
+ <_>
+
+ 0 -1 2204 -4.4279001653194427e-02
+
+ 8.3951997756958008e-01 -1.7429600656032562e-01
+ <_>
+
+ 0 -1 2205 4.3988998979330063e-02
+
+ 1.1557199805974960e-01 -1.9666889905929565e+00
+ <_>
+
+ 0 -1 2206 1.5907000750303268e-02
+
+ -3.7576001137495041e-02 -1.0311100482940674e+00
+ <_>
+
+ 0 -1 2207 -9.2754997313022614e-02
+
+ -1.3530019521713257e+00 1.2141299992799759e-01
+ <_>
+
+ 0 -1 2208 7.1037001907825470e-02
+
+ -1.7684300243854523e-01 7.4485200643539429e-01
+ <_>
+
+ 0 -1 2209 5.7762000709772110e-02
+
+ 1.2835599482059479e-01 -4.4444200396537781e-01
+ <_>
+
+ 0 -1 2210 -1.6432000324130058e-02
+
+ 8.0152702331542969e-01 -1.7491699755191803e-01
+ <_>
+
+ 0 -1 2211 2.3939000442624092e-02
+
+ 1.6144999861717224e-01 -1.2364500015974045e-01
+ <_>
+
+ 0 -1 2212 1.2636000290513039e-02
+
+ 1.5411999821662903e-01 -3.3293798565864563e-01
+ <_>
+
+ 0 -1 2213 -5.4347999393939972e-02
+
+ -1.8400700092315674e+00 1.4835999906063080e-01
+ <_>
+
+ 0 -1 2214 -1.3261999934911728e-02
+
+ -8.0838799476623535e-01 -2.7726000174880028e-02
+ <_>
+
+ 0 -1 2215 6.1340001411736012e-03
+
+ -1.3785000145435333e-01 3.2858499884605408e-01
+ <_>
+
+ 0 -1 2216 2.8991000726819038e-02
+
+ -2.5516999885439873e-02 -8.3387202024459839e-01
+ <_>
+
+ 0 -1 2217 -2.1986000239849091e-02
+
+ -7.3739999532699585e-01 1.7887100577354431e-01
+ <_>
+
+ 0 -1 2218 5.3269998170435429e-03
+
+ -4.5449298620223999e-01 6.8791002035140991e-02
+ <_>
+
+ 0 -1 2219 8.6047999560832977e-02
+
+ 2.1008500456809998e-01 -3.7808901071548462e-01
+ <_>
+
+ 0 -1 2220 -8.5549997165799141e-03
+
+ 4.0134999155998230e-01 -2.1074099838733673e-01
+ <_>
+
+ 0 -1 2221 6.7790001630783081e-03
+
+ -2.1648999303579330e-02 4.5421499013900757e-01
+ <_>
+
+ 0 -1 2222 -6.3959998078644276e-03
+
+ -4.9818599224090576e-01 7.5907997786998749e-02
+ <_>
+
+ 0 -1 2223 8.9469999074935913e-03
+
+ 1.7857700586318970e-01 -2.8454899787902832e-01
+ <_>
+
+ 0 -1 2224 3.2589999027550220e-03
+
+ 4.6624999493360519e-02 -5.5206298828125000e-01
+ <_>
+
+ 0 -1 2225 4.1476998478174210e-02
+
+ 1.7550499737262726e-01 -2.0703999698162079e-01
+ <_>
+
+ 0 -1 2226 -6.7449999041855335e-03
+
+ -4.6392598748207092e-01 6.9303996860980988e-02
+ <_>
+
+ 0 -1 2227 3.0564999207854271e-02
+
+ 5.1734998822212219e-02 7.5550502538681030e-01
+ <_>
+
+ 0 -1 2228 -7.4780001305043697e-03
+
+ 1.4893899857997894e-01 -3.1906801462173462e-01
+ <_>
+
+ 0 -1 2229 8.9088998734951019e-02
+
+ 1.3738800585269928e-01 -1.1379710435867310e+00
+ <_>
+
+ 0 -1 2230 7.3230001144111156e-03
+
+ -2.8829199075698853e-01 1.9088600575923920e-01
+ <_>
+
+ 0 -1 2231 -1.8205000087618828e-02
+
+ -3.0178600549697876e-01 1.6795800626277924e-01
+ <_>
+
+ 0 -1 2232 -2.5828000158071518e-02
+
+ -9.8137998580932617e-01 -1.9860999658703804e-02
+ <_>
+
+ 0 -1 2233 1.0936199873685837e-01
+
+ 4.8790000379085541e-02 5.3118300437927246e-01
+ <_>
+
+ 0 -1 2234 -1.1424999684095383e-02
+
+ 2.3705999553203583e-01 -2.7925300598144531e-01
+ <_>
+
+ 0 -1 2235 -5.7565998286008835e-02
+
+ 4.7255399823188782e-01 6.5171003341674805e-02
+ <_>
+
+ 0 -1 2236 1.0278300195932388e-01
+
+ -2.0765100419521332e-01 5.0947701930999756e-01
+ <_>
+
+ 0 -1 2237 2.7041999623179436e-02
+
+ 1.6421200335025787e-01 -1.4508620500564575e+00
+ <_>
+
+ 0 -1 2238 -1.3635000213980675e-02
+
+ -5.6543898582458496e-01 2.3788999766111374e-02
+ <_>
+
+ 0 -1 2239 -3.2158198952674866e-01
+
+ -3.5602829456329346e+00 1.1801300197839737e-01
+ <_>
+
+ 0 -1 2240 2.0458100736141205e-01
+
+ -3.7016000598669052e-02 -1.0225499868392944e+00
+ <_>
+
+ 0 -1 2241 -7.0347003638744354e-02
+
+ -5.6491899490356445e-01 1.8525199592113495e-01
+ <_>
+
+ 0 -1 2242 3.7831000983715057e-02
+
+ -2.9901999980211258e-02 -8.2921499013900757e-01
+ <_>
+
+ 0 -1 2243 -7.0298001170158386e-02
+
+ -5.3172302246093750e-01 1.4430199563503265e-01
+ <_>
+
+ 0 -1 2244 6.3221000134944916e-02
+
+ -2.2041200101375580e-01 4.7952198982238770e-01
+ <_>
+
+ 0 -1 2245 3.6393001675605774e-02
+
+ 1.4222699403762817e-01 -6.1193901300430298e-01
+ <_>
+
+ 0 -1 2246 4.0099998004734516e-03
+
+ -3.4560799598693848e-01 1.1738699674606323e-01
+ <_>
+
+ 0 -1 2247 -4.9106001853942871e-02
+
+ 9.5984101295471191e-01 6.4934998750686646e-02
+ <_>
+
+ 0 -1 2248 -7.1583002805709839e-02
+
+ 1.7385669946670532e+00 -1.4252899587154388e-01
+ <_>
+
+ 0 -1 2249 -3.8008999079465866e-02
+
+ 1.3872820138931274e+00 6.6188000142574310e-02
+ <_>
+
+ 0 -1 2250 -3.1570000573992729e-03
+
+ 5.3677000105381012e-02 -5.4048001766204834e-01
+ <_>
+
+ 0 -1 2251 1.9458999857306480e-02
+
+ -9.3620002269744873e-02 3.9131000638008118e-01
+ <_>
+
+ 0 -1 2252 1.1293999850749969e-02
+
+ 3.7223998457193375e-02 -5.4251801967620850e-01
+ <_>
+
+ 0 -1 2253 -3.3495001494884491e-02
+
+ 9.5307898521423340e-01 3.7696998566389084e-02
+ <_>
+
+ 0 -1 2254 9.2035003006458282e-02
+
+ -1.3488399982452393e-01 2.2897069454193115e+00
+ <_>
+
+ 0 -1 2255 3.7529999390244484e-03
+
+ 2.2824199497699738e-01 -5.9983700513839722e-01
+ <_>
+
+ 0 -1 2256 1.2848000042140484e-02
+
+ -2.2005200386047363e-01 3.7221899628639221e-01
+ <_>
+
+ 0 -1 2257 -1.4316199719905853e-01
+
+ 1.2855789661407471e+00 4.7237001359462738e-02
+ <_>
+
+ 0 -1 2258 -9.6879996359348297e-02
+
+ -3.9550929069519043e+00 -7.2903998196125031e-02
+ <_>
+
+ 0 -1 2259 -8.8459998369216919e-03
+
+ 3.7674999237060547e-01 -4.6484000980854034e-02
+ <_>
+
+ 0 -1 2260 1.5900000929832458e-02
+
+ -2.4457000195980072e-02 -8.0034798383712769e-01
+ <_>
+
+ 0 -1 2261 7.0372000336647034e-02
+
+ 1.7019000649452209e-01 -6.3068997859954834e-01
+ <_>
+
+ 0 -1 2262 -3.7953998893499374e-02
+
+ -9.3667197227478027e-01 -4.1214000433683395e-02
+ <_>
+
+ 0 -1 2263 5.1597899198532104e-01
+
+ 1.3080599904060364e-01 -1.5802290439605713e+00
+ <_>
+
+ 0 -1 2264 -3.2843001186847687e-02
+
+ -1.1441620588302612e+00 -4.9173999577760696e-02
+ <_>
+
+ 0 -1 2265 -3.6357000470161438e-02
+
+ 4.9606400728225708e-01 -3.4458998590707779e-02
+ <_>
+
+ 0 -1 2266 6.8080001510679722e-03
+
+ -3.0997800827026367e-01 1.7054800689220428e-01
+ <_>
+
+ 0 -1 2267 -1.6114000231027603e-02
+
+ -3.7904599308967590e-01 1.6078999638557434e-01
+ <_>
+
+ 0 -1 2268 8.4530003368854523e-03
+
+ -1.8655499815940857e-01 5.6367701292037964e-01
+ <_>
+
+ 0 -1 2269 -1.3752399384975433e-01
+
+ -5.8989900350570679e-01 1.1749500036239624e-01
+ <_>
+
+ 0 -1 2270 1.7688000202178955e-01
+
+ -1.5424899756908417e-01 9.2911100387573242e-01
+ <_>
+
+ 0 -1 2271 7.9309996217489243e-03
+
+ 3.2190701365470886e-01 -1.6392600536346436e-01
+ <_>
+
+ 0 -1 2272 1.0971800237894058e-01
+
+ -1.5876500308513641e-01 1.0186259746551514e+00
+ <_>
+
+ 0 -1 2273 -3.0293000862002373e-02
+
+ 7.5587302446365356e-01 3.1794998794794083e-02
+ <_>
+
+ 0 -1 2274 -2.3118000477552414e-02
+
+ -8.8451498746871948e-01 -9.5039997249841690e-03
+ <_>
+
+ 0 -1 2275 -3.0900000128895044e-03
+
+ 2.3838299512863159e-01 -1.1606200039386749e-01
+ <_>
+
+ 0 -1 2276 -3.3392000943422318e-02
+
+ -1.8738139867782593e+00 -6.8502999842166901e-02
+ <_>
+
+ 0 -1 2277 1.3190000317990780e-02
+
+ 1.2919899821281433e-01 -6.7512202262878418e-01
+ <_>
+
+ 0 -1 2278 1.4661000110208988e-02
+
+ -2.4829000234603882e-02 -7.4396800994873047e-01
+ <_>
+
+ 0 -1 2279 -1.3248000293970108e-02
+
+ 4.6820199489593506e-01 -2.4165000766515732e-02
+ <_>
+
+ 0 -1 2280 -1.6218999400734901e-02
+
+ 4.0083798766136169e-01 -2.1255700290203094e-01
+ <_>
+
+ 0 -1 2281 -2.9052000492811203e-02
+
+ -1.5650019645690918e+00 1.4375899732112885e-01
+ <_>
+
+ 0 -1 2282 -1.0153199732303619e-01
+
+ -1.9220689535140991e+00 -6.9559998810291290e-02
+ <_>
+
+ 0 -1 2283 3.7753999233245850e-02
+
+ 1.3396799564361572e-01 -2.2639141082763672e+00
+ <_>
+
+ 0 -1 2284 -2.8555598855018616e-01
+
+ 1.0215270519256592e+00 -1.5232199430465698e-01
+ <_>
+
+ 0 -1 2285 1.5360699594020844e-01
+
+ -9.7409002482891083e-02 4.1662400960922241e-01
+ <_>
+
+ 0 -1 2286 -2.1199999901000410e-04
+
+ 1.1271899938583374e-01 -4.1653999686241150e-01
+ <_>
+
+ 0 -1 2287 -2.0597999915480614e-02
+
+ 6.0540497303009033e-01 6.2467999756336212e-02
+ <_>
+
+ 0 -1 2288 3.7353999912738800e-02
+
+ -1.8919000029563904e-01 4.6464699506759644e-01
+ <_>
+
+ 0 -1 2289 5.7275000959634781e-02
+
+ 1.1565300077199936e-01 -1.3213009834289551e+00
+ <_>
+
+ 0 -1 2290 5.1029999740421772e-03
+
+ -2.8061500191688538e-01 1.9313399493694305e-01
+ <_>
+
+ 0 -1 2291 -5.4644998162984848e-02
+
+ 7.2428500652313232e-01 7.5447998940944672e-02
+ <_>
+
+ 0 -1 2292 2.5349000468850136e-02
+
+ -1.9481800496578217e-01 4.6032801270484924e-01
+ <_>
+
+ 0 -1 2293 2.4311000481247902e-02
+
+ 1.5564100444316864e-01 -4.9913901090621948e-01
+ <_>
+
+ 0 -1 2294 3.5962000489234924e-02
+
+ -5.8573000133037567e-02 -1.5418399572372437e+00
+ <_>
+
+ 0 -1 2295 -1.0000699758529663e-01
+
+ -1.6100039482116699e+00 1.1450500041246414e-01
+ <_>
+
+ 0 -1 2296 8.4435999393463135e-02
+
+ -6.1406999826431274e-02 -1.4673349857330322e+00
+ <_>
+
+ 0 -1 2297 1.5947999432682991e-02
+
+ 1.6287900507450104e-01 -1.1026400327682495e-01
+ <_>
+
+ 0 -1 2298 3.3824000507593155e-02
+
+ -1.7932699620723724e-01 5.7218402624130249e-01
+ <_>
+
+ 0 -1 2299 -6.1996001750230789e-02
+
+ 4.6511812210083008e+00 9.4534002244472504e-02
+ <_>
+
+ 0 -1 2300 6.9876998662948608e-02
+
+ -1.6985900700092316e-01 8.7028998136520386e-01
+ <_>
+
+ 0 -1 2301 -2.7916999533772469e-02
+
+ 9.1042500734329224e-01 5.6827001273632050e-02
+ <_>
+
+ 0 -1 2302 -1.2764000333845615e-02
+
+ 2.2066700458526611e-01 -2.7769100666046143e-01
+ <_>
+ 199
+ -3.2573320865631104e+00
+
+ <_>
+
+ 0 -1 2303 2.1662000566720963e-02
+
+ -8.9868897199630737e-01 2.9436299204826355e-01
+ <_>
+
+ 0 -1 2304 1.0044500231742859e-01
+
+ -3.7659201025962830e-01 6.0891002416610718e-01
+ <_>
+
+ 0 -1 2305 2.6003999635577202e-02
+
+ -3.8128501176834106e-01 3.9217400550842285e-01
+ <_>
+
+ 0 -1 2306 2.8441000729799271e-02
+
+ -1.8182300031185150e-01 5.8927202224731445e-01
+ <_>
+
+ 0 -1 2307 3.8612000644207001e-02
+
+ -2.2399599850177765e-01 6.3779997825622559e-01
+ <_>
+
+ 0 -1 2308 -4.6594999730587006e-02
+
+ 7.0812201499938965e-01 -1.4666199684143066e-01
+ <_>
+
+ 0 -1 2309 -4.2791999876499176e-02
+
+ 4.7680398821830750e-01 -2.9233199357986450e-01
+ <_>
+
+ 0 -1 2310 3.7960000336170197e-03
+
+ -1.8510299921035767e-01 5.2626699209213257e-01
+ <_>
+
+ 0 -1 2311 4.2348999530076981e-02
+
+ 3.9244998246431351e-02 -8.9197701215744019e-01
+ <_>
+
+ 0 -1 2312 1.9598999992012978e-02
+
+ -2.3358400166034698e-01 4.4146499037742615e-01
+ <_>
+
+ 0 -1 2313 8.7400001939386129e-04
+
+ -4.6063598990440369e-01 1.7689600586891174e-01
+ <_>
+
+ 0 -1 2314 -4.3629999272525311e-03
+
+ 3.3493199944496155e-01 -2.9893401265144348e-01
+ <_>
+
+ 0 -1 2315 1.6973000019788742e-02
+
+ -1.6408699750900269e-01 1.5993679761886597e+00
+ <_>
+
+ 0 -1 2316 3.6063998937606812e-02
+
+ 2.2601699829101562e-01 -5.3186100721359253e-01
+ <_>
+
+ 0 -1 2317 -7.0864997804164886e-02
+
+ 1.5220500528812408e-01 -4.1914600133895874e-01
+ <_>
+
+ 0 -1 2318 -6.3075996935367584e-02
+
+ -1.4874019622802734e+00 1.2953700125217438e-01
+ <_>
+
+ 0 -1 2319 2.9670000076293945e-02
+
+ -1.9145900011062622e-01 9.8184901475906372e-01
+ <_>
+
+ 0 -1 2320 3.7873998284339905e-02
+
+ 1.3459500670433044e-01 -5.6316298246383667e-01
+ <_>
+
+ 0 -1 2321 -3.3289000391960144e-02
+
+ -1.0828030109405518e+00 -1.1504000052809715e-02
+ <_>
+
+ 0 -1 2322 -3.1608998775482178e-02
+
+ -5.9224498271942139e-01 1.3394799828529358e-01
+ <_>
+
+ 0 -1 2323 1.0740000288933516e-03
+
+ -4.9185800552368164e-01 9.4446003437042236e-02
+ <_>
+
+ 0 -1 2324 -7.1556001901626587e-02
+
+ 5.9710198640823364e-01 -3.9553001523017883e-02
+ <_>
+
+ 0 -1 2325 -8.1170000135898590e-02
+
+ -1.1817820072174072e+00 -2.8254000470042229e-02
+ <_>
+
+ 0 -1 2326 4.4860001653432846e-03
+
+ -6.1028099060058594e-01 2.2619099915027618e-01
+ <_>
+
+ 0 -1 2327 -4.2176000773906708e-02
+
+ -1.1435619592666626e+00 -2.9001999646425247e-02
+ <_>
+
+ 0 -1 2328 -6.5640002489089966e-02
+
+ -1.6470279693603516e+00 1.2810300290584564e-01
+ <_>
+
+ 0 -1 2329 1.8188999965786934e-02
+
+ -3.1149399280548096e-01 2.5739601254463196e-01
+ <_>
+
+ 0 -1 2330 -5.1520001143217087e-02
+
+ -6.9206899404525757e-01 1.5270799398422241e-01
+ <_>
+
+ 0 -1 2331 -4.7150999307632446e-02
+
+ -7.1868300437927246e-01 2.6879999786615372e-03
+ <_>
+
+ 0 -1 2332 1.7488999292254448e-02
+
+ 2.2371199727058411e-01 -5.5381798744201660e-01
+ <_>
+
+ 0 -1 2333 -2.5264000520110130e-02
+
+ 1.0319819450378418e+00 -1.7496499419212341e-01
+ <_>
+
+ 0 -1 2334 -4.0745001286268234e-02
+
+ 4.4961598515510559e-01 3.9349000900983810e-02
+ <_>
+
+ 0 -1 2335 -3.7666998803615570e-02
+
+ -8.5475701093673706e-01 -1.2463999912142754e-02
+ <_>
+
+ 0 -1 2336 -1.3411000370979309e-02
+
+ 5.7845598459243774e-01 -1.7467999830842018e-02
+ <_>
+
+ 0 -1 2337 -7.8999997640494257e-05
+
+ -3.7749201059341431e-01 1.3961799442768097e-01
+ <_>
+
+ 0 -1 2338 -1.1415000073611736e-02
+
+ -2.6186600327491760e-01 2.3712499439716339e-01
+ <_>
+
+ 0 -1 2339 3.7200000137090683e-02
+
+ -2.8626000508666039e-02 -1.2945239543914795e+00
+ <_>
+
+ 0 -1 2340 3.4050000831484795e-03
+
+ 2.0531399548053741e-01 -1.8747499585151672e-01
+ <_>
+
+ 0 -1 2341 -2.2483000531792641e-02
+
+ 6.7027199268341064e-01 -1.9594000279903412e-01
+ <_>
+
+ 0 -1 2342 2.3274999111890793e-02
+
+ 1.7405399680137634e-01 -3.2746300101280212e-01
+ <_>
+
+ 0 -1 2343 -1.3917000032961369e-02
+
+ -8.3954298496246338e-01 -6.3760001212358475e-03
+ <_>
+
+ 0 -1 2344 7.5429999269545078e-03
+
+ -3.4194998443126678e-02 5.8998197317123413e-01
+ <_>
+
+ 0 -1 2345 -1.1539000086486340e-02
+
+ 4.2142799496650696e-01 -2.3510499298572540e-01
+ <_>
+
+ 0 -1 2346 5.2501998841762543e-02
+
+ 6.9303996860980988e-02 7.3226499557495117e-01
+ <_>
+
+ 0 -1 2347 5.2715998142957687e-02
+
+ -1.5688100457191467e-01 1.0907289981842041e+00
+ <_>
+
+ 0 -1 2348 -1.1726000346243382e-02
+
+ -7.0934301614761353e-01 1.6828800737857819e-01
+ <_>
+
+ 0 -1 2349 9.5945999026298523e-02
+
+ -1.6192899644374847e-01 1.0072519779205322e+00
+ <_>
+
+ 0 -1 2350 -1.5871999785304070e-02
+
+ 3.9008399844169617e-01 -5.3777001798152924e-02
+ <_>
+
+ 0 -1 2351 3.4818001091480255e-02
+
+ 1.7179999500513077e-02 -9.3941801786422729e-01
+ <_>
+
+ 0 -1 2352 3.4791998565196991e-02
+
+ 5.0462998449802399e-02 5.4465699195861816e-01
+ <_>
+
+ 0 -1 2353 1.6284000128507614e-02
+
+ -2.6981300115585327e-01 4.0365299582481384e-01
+ <_>
+
+ 0 -1 2354 -4.4319000095129013e-02
+
+ 8.4399998188018799e-01 3.2882999628782272e-02
+ <_>
+
+ 0 -1 2355 -5.5689997971057892e-03
+
+ 1.5309399366378784e-01 -3.4959799051284790e-01
+ <_>
+
+ 0 -1 2356 -6.5842002630233765e-02
+
+ -9.2711198329925537e-01 1.6800999641418457e-01
+ <_>
+
+ 0 -1 2357 -7.3337003588676453e-02
+
+ 5.1614499092102051e-01 -2.0236000418663025e-01
+ <_>
+
+ 0 -1 2358 1.6450000926852226e-02
+
+ 1.3950599730014801e-01 -4.9301299452781677e-01
+ <_>
+
+ 0 -1 2359 -9.2630004510283470e-03
+
+ -9.0101999044418335e-01 -1.6116000711917877e-02
+ <_>
+
+ 0 -1 2360 5.9139998629689217e-03
+
+ 1.9858199357986450e-01 -1.6731299459934235e-01
+ <_>
+
+ 0 -1 2361 -8.4699998842552304e-04
+
+ 9.4005003571510315e-02 -4.1570898890495300e-01
+ <_>
+
+ 0 -1 2362 2.0532900094985962e-01
+
+ -6.0022000223398209e-02 7.0993602275848389e-01
+ <_>
+
+ 0 -1 2363 -1.6883000731468201e-02
+
+ 2.4392199516296387e-01 -3.0551800131797791e-01
+ <_>
+
+ 0 -1 2364 -1.9111000001430511e-02
+
+ 6.1229902505874634e-01 2.4252999573945999e-02
+ <_>
+
+ 0 -1 2365 -2.5962999090552330e-02
+
+ 9.0764999389648438e-01 -1.6722099483013153e-01
+ <_>
+
+ 0 -1 2366 -2.1762000396847725e-02
+
+ -3.1384700536727905e-01 2.0134599506855011e-01
+ <_>
+
+ 0 -1 2367 -2.4119999259710312e-02
+
+ -6.6588401794433594e-01 7.4559999629855156e-03
+ <_>
+
+ 0 -1 2368 4.7129999846220016e-02
+
+ 5.9533998370170593e-02 8.7804502248764038e-01
+ <_>
+
+ 0 -1 2369 -4.5984998345375061e-02
+
+ 8.0067998170852661e-01 -1.7252300679683685e-01
+ <_>
+
+ 0 -1 2370 2.6507999747991562e-02
+
+ 1.8774099647998810e-01 -6.0850602388381958e-01
+ <_>
+
+ 0 -1 2371 -4.8615001142024994e-02
+
+ 5.8644098043441772e-01 -1.9427700340747833e-01
+ <_>
+
+ 0 -1 2372 -1.8562000244855881e-02
+
+ -2.5587901473045349e-01 1.6326199471950531e-01
+ <_>
+
+ 0 -1 2373 1.2678000144660473e-02
+
+ -1.4228000305593014e-02 -7.6738101243972778e-01
+ <_>
+
+ 0 -1 2374 -1.1919999960809946e-03
+
+ 2.0495000481605530e-01 -1.1404299736022949e-01
+ <_>
+
+ 0 -1 2375 -4.9088999629020691e-02
+
+ -1.0740849971771240e+00 -3.8940999656915665e-02
+ <_>
+
+ 0 -1 2376 -1.7436999827623367e-02
+
+ -5.7973802089691162e-01 1.8584500253200531e-01
+ <_>
+
+ 0 -1 2377 -1.4770000241696835e-02
+
+ -6.6150301694869995e-01 5.3119999356567860e-03
+ <_>
+
+ 0 -1 2378 -2.2905200719833374e-01
+
+ -4.8305100202560425e-01 1.2326399981975555e-01
+ <_>
+
+ 0 -1 2379 -1.2707099318504333e-01
+
+ 5.7452601194381714e-01 -1.9420400261878967e-01
+ <_>
+
+ 0 -1 2380 1.0339000262320042e-02
+
+ -5.4641999304294586e-02 2.4501800537109375e-01
+ <_>
+
+ 0 -1 2381 6.9010001607239246e-03
+
+ 1.2180600315332413e-01 -3.8797399401664734e-01
+ <_>
+
+ 0 -1 2382 2.9025399684906006e-01
+
+ 1.0966199636459351e-01 -30.
+ <_>
+
+ 0 -1 2383 -2.3804999887943268e-01
+
+ -1.7352679967880249e+00 -6.3809998333454132e-02
+ <_>
+
+ 0 -1 2384 6.2481001019477844e-02
+
+ 1.3523000478744507e-01 -7.0301097631454468e-01
+ <_>
+
+ 0 -1 2385 4.7109997831285000e-03
+
+ -4.6984100341796875e-01 6.0341998934745789e-02
+ <_>
+
+ 0 -1 2386 -2.7815999463200569e-02
+
+ 6.9807600975036621e-01 1.3719999697059393e-03
+ <_>
+
+ 0 -1 2387 -1.7020000144839287e-02
+
+ 1.6870440244674683e+00 -1.4314800500869751e-01
+ <_>
+
+ 0 -1 2388 -4.9754999577999115e-02
+
+ 7.9497700929641724e-01 7.7199999941512942e-04
+ <_>
+
+ 0 -1 2389 -7.4732996523380280e-02
+
+ -1.0132360458374023e+00 -1.9388999789953232e-02
+ <_>
+
+ 0 -1 2390 3.2009001821279526e-02
+
+ 1.4412100613117218e-01 -4.2139101028442383e-01
+ <_>
+
+ 0 -1 2391 -9.4463996589183807e-02
+
+ 5.0682598352432251e-01 -2.0478899776935577e-01
+ <_>
+
+ 0 -1 2392 -1.5426999889314175e-02
+
+ -1.5811300277709961e-01 1.7806899547576904e-01
+ <_>
+
+ 0 -1 2393 -4.0540001355111599e-03
+
+ -5.4366701841354370e-01 3.1235000118613243e-02
+ <_>
+
+ 0 -1 2394 3.0080000869929790e-03
+
+ -1.7376799881458282e-01 3.0441701412200928e-01
+ <_>
+
+ 0 -1 2395 -1.0091999545693398e-02
+
+ 2.5103801488876343e-01 -2.6224100589752197e-01
+ <_>
+
+ 0 -1 2396 -3.8818001747131348e-02
+
+ 9.3226701021194458e-01 7.2659999132156372e-02
+ <_>
+
+ 0 -1 2397 3.4651998430490494e-02
+
+ -3.3934999257326126e-02 -8.5707902908325195e-01
+ <_>
+
+ 0 -1 2398 -4.6729999594390392e-03
+
+ 3.4969300031661987e-01 -4.8517998307943344e-02
+ <_>
+
+ 0 -1 2399 6.8499997723847628e-04
+
+ 6.6573001444339752e-02 -4.4973799586296082e-01
+ <_>
+
+ 0 -1 2400 3.5317000001668930e-02
+
+ 1.4275799691677094e-01 -4.6726399660110474e-01
+ <_>
+
+ 0 -1 2401 -2.3569999262690544e-02
+
+ -1.0286079645156860e+00 -4.5288000255823135e-02
+ <_>
+
+ 0 -1 2402 -1.9109999993816018e-03
+
+ -1.9652199745178223e-01 2.8661000728607178e-01
+ <_>
+
+ 0 -1 2403 -1.6659000888466835e-02
+
+ -7.7532202005386353e-01 -8.3280000835657120e-03
+ <_>
+
+ 0 -1 2404 6.6062200069427490e-01
+
+ 1.3232499361038208e-01 -3.5266680717468262e+00
+ <_>
+
+ 0 -1 2405 1.0970599949359894e-01
+
+ -1.5547199547290802e-01 1.4674140214920044e+00
+ <_>
+
+ 0 -1 2406 1.3500999659299850e-02
+
+ 1.5233400464057922e-01 -1.3020930290222168e+00
+ <_>
+
+ 0 -1 2407 -2.2871999070048332e-02
+
+ -7.1325999498367310e-01 -8.7040001526474953e-03
+ <_>
+
+ 0 -1 2408 -8.1821002066135406e-02
+
+ 1.1127580404281616e+00 8.3219997584819794e-02
+ <_>
+
+ 0 -1 2409 -5.2728001028299332e-02
+
+ 9.3165099620819092e-01 -1.7103999853134155e-01
+ <_>
+
+ 0 -1 2410 -2.5242000818252563e-02
+
+ -1.9733799993991852e-01 2.5359401106834412e-01
+ <_>
+
+ 0 -1 2411 -4.3818999081850052e-02
+
+ 4.1815200448036194e-01 -2.4585500359535217e-01
+ <_>
+
+ 0 -1 2412 -1.8188999965786934e-02
+
+ -5.1743197441101074e-01 2.0174199342727661e-01
+ <_>
+
+ 0 -1 2413 2.3466000333428383e-02
+
+ -4.3071001768112183e-02 -1.0636579990386963e+00
+ <_>
+
+ 0 -1 2414 3.4216001629829407e-02
+
+ 5.3780999034643173e-02 4.9707201123237610e-01
+ <_>
+
+ 0 -1 2415 2.5692999362945557e-02
+
+ -2.3800100386142731e-01 4.1651499271392822e-01
+ <_>
+
+ 0 -1 2416 -2.6565000414848328e-02
+
+ -8.8574802875518799e-01 1.3365900516510010e-01
+ <_>
+
+ 0 -1 2417 6.0942001640796661e-02
+
+ -2.0669700205326080e-01 5.8309000730514526e-01
+ <_>
+
+ 0 -1 2418 1.4474500715732574e-01
+
+ 1.3282300531864166e-01 -3.1449348926544189e+00
+ <_>
+
+ 0 -1 2419 5.3410999476909637e-02
+
+ -1.7325200140476227e-01 6.9190698862075806e-01
+ <_>
+
+ 0 -1 2420 1.1408000253140926e-02
+
+ 5.4822001606225967e-02 3.0240398645401001e-01
+ <_>
+
+ 0 -1 2421 -2.3179999552667141e-03
+
+ 1.5820899605751038e-01 -3.1973201036453247e-01
+ <_>
+
+ 0 -1 2422 -2.9695000499486923e-02
+
+ 7.1274799108505249e-01 5.8136001229286194e-02
+ <_>
+
+ 0 -1 2423 2.7249999344348907e-02
+
+ -1.5754100680351257e-01 9.2143797874450684e-01
+ <_>
+
+ 0 -1 2424 -3.6200000904500484e-03
+
+ -3.4548398852348328e-01 2.0220999419689178e-01
+ <_>
+
+ 0 -1 2425 -1.2578999623656273e-02
+
+ -5.5650299787521362e-01 2.0388999953866005e-02
+ <_>
+
+ 0 -1 2426 -8.8849000632762909e-02
+
+ -3.6100010871887207e+00 1.3164199888706207e-01
+ <_>
+
+ 0 -1 2427 -1.9256999716162682e-02
+
+ 5.1908999681472778e-01 -1.9284300506114960e-01
+ <_>
+
+ 0 -1 2428 -1.6666999086737633e-02
+
+ -8.7499998509883881e-02 1.5812499821186066e-01
+ <_>
+
+ 0 -1 2429 1.2931999750435352e-02
+
+ 2.7405999600887299e-02 -5.5123901367187500e-01
+ <_>
+
+ 0 -1 2430 -1.3431999832391739e-02
+
+ 2.3457799851894379e-01 -4.3235000222921371e-02
+ <_>
+
+ 0 -1 2431 1.8810000270605087e-02
+
+ -3.9680998772382736e-02 -9.4373297691345215e-01
+ <_>
+
+ 0 -1 2432 -6.4349998719990253e-03
+
+ 4.5703700184822083e-01 -4.0520001202821732e-03
+ <_>
+
+ 0 -1 2433 -2.4249000474810600e-02
+
+ -7.6248002052307129e-01 -1.9857000559568405e-02
+ <_>
+
+ 0 -1 2434 -2.9667999595403671e-02
+
+ -3.7412509918212891e+00 1.1250600218772888e-01
+ <_>
+
+ 0 -1 2435 5.1150000654160976e-03
+
+ -6.3781797885894775e-01 1.1223999783396721e-02
+ <_>
+
+ 0 -1 2436 -5.7819997891783714e-03
+
+ 1.9374400377273560e-01 -8.2042001187801361e-02
+ <_>
+
+ 0 -1 2437 1.6606999561190605e-02
+
+ -1.6192099452018738e-01 1.1334990262985229e+00
+ <_>
+
+ 0 -1 2438 3.8228001445531845e-02
+
+ 2.1105000749230385e-02 7.6264202594757080e-01
+ <_>
+
+ 0 -1 2439 -5.7094000279903412e-02
+
+ -1.6974929571151733e+00 -5.9762001037597656e-02
+ <_>
+
+ 0 -1 2440 -5.3883001208305359e-02
+
+ 1.1850190162658691e+00 9.0966999530792236e-02
+ <_>
+
+ 0 -1 2441 -2.6110000908374786e-03
+
+ -4.0941199660301208e-01 8.3820998668670654e-02
+ <_>
+
+ 0 -1 2442 2.9714399576187134e-01
+
+ 1.5529899299144745e-01 -1.0995409488677979e+00
+ <_>
+
+ 0 -1 2443 -8.9063003659248352e-02
+
+ 4.8947200179100037e-01 -2.0041200518608093e-01
+ <_>
+
+ 0 -1 2444 -5.6193001568317413e-02
+
+ -2.4581399559974670e-01 1.4365500211715698e-01
+ <_>
+
+ 0 -1 2445 3.7004999816417694e-02
+
+ -4.8168998211622238e-02 -1.2310709953308105e+00
+ <_>
+
+ 0 -1 2446 -8.4840003401041031e-03
+
+ 4.3372601270675659e-01 1.3779999688267708e-02
+ <_>
+
+ 0 -1 2447 -2.4379999376833439e-03
+
+ 1.8949699401855469e-01 -3.2294198870658875e-01
+ <_>
+
+ 0 -1 2448 -7.1639999747276306e-02
+
+ -4.3979001045227051e-01 2.2730199992656708e-01
+ <_>
+
+ 0 -1 2449 5.2260002121329308e-03
+
+ -2.0548400282859802e-01 5.0933301448822021e-01
+ <_>
+
+ 0 -1 2450 -6.1360001564025879e-03
+
+ 3.1157198548316956e-01 7.0680998265743256e-02
+ <_>
+
+ 0 -1 2451 1.5595000237226486e-02
+
+ -3.0934798717498779e-01 1.5627700090408325e-01
+ <_>
+
+ 0 -1 2452 2.5995999574661255e-02
+
+ 1.3821600377559662e-01 -1.7616599798202515e-01
+ <_>
+
+ 0 -1 2453 -1.2085000053048134e-02
+
+ -5.1070201396942139e-01 5.8440998196601868e-02
+ <_>
+
+ 0 -1 2454 -6.7836001515388489e-02
+
+ 4.7757101058959961e-01 -7.1446001529693604e-02
+ <_>
+
+ 0 -1 2455 -1.4715000055730343e-02
+
+ 4.5238900184631348e-01 -1.9861400127410889e-01
+ <_>
+
+ 0 -1 2456 2.5118999183177948e-02
+
+ 1.2954899668693542e-01 -8.6266398429870605e-01
+ <_>
+
+ 0 -1 2457 1.8826000392436981e-02
+
+ -4.1570000350475311e-02 -1.1354700326919556e+00
+ <_>
+
+ 0 -1 2458 -2.1263999864459038e-02
+
+ -3.4738001227378845e-01 1.5779499709606171e-01
+ <_>
+
+ 0 -1 2459 9.4609996303915977e-03
+
+ 4.8639997839927673e-03 -6.1654800176620483e-01
+ <_>
+
+ 0 -1 2460 2.2957700490951538e-01
+
+ 8.1372998654842377e-02 6.9841402769088745e-01
+ <_>
+
+ 0 -1 2461 -3.8061998784542084e-02
+
+ 1.1616369485855103e+00 -1.4976699650287628e-01
+ <_>
+
+ 0 -1 2462 -1.3484999537467957e-02
+
+ -3.2036399841308594e-01 1.7365099489688873e-01
+ <_>
+
+ 0 -1 2463 3.6238998174667358e-02
+
+ -1.8158499896526337e-01 6.1956697702407837e-01
+ <_>
+
+ 0 -1 2464 6.7210001870989799e-03
+
+ 7.9600000753998756e-04 4.2441400885581970e-01
+ <_>
+
+ 0 -1 2465 9.6525996923446655e-02
+
+ -1.4696800708770752e-01 1.2525680065155029e+00
+ <_>
+
+ 0 -1 2466 -3.5656999796628952e-02
+
+ -3.9781698584556580e-01 1.4191399514675140e-01
+ <_>
+
+ 0 -1 2467 1.0772000066936016e-02
+
+ -1.8194000422954559e-01 5.9762197732925415e-01
+ <_>
+
+ 0 -1 2468 7.9279996454715729e-02
+
+ 1.4642499387264252e-01 -7.8836899995803833e-01
+ <_>
+
+ 0 -1 2469 3.2841000705957413e-02
+
+ -6.2408000230789185e-02 -1.4227490425109863e+00
+ <_>
+
+ 0 -1 2470 -2.7781000360846519e-02
+
+ 3.4033098816871643e-01 3.0670000240206718e-02
+ <_>
+
+ 0 -1 2471 -4.0339999832212925e-03
+
+ 3.1084701418876648e-01 -2.2595700621604919e-01
+ <_>
+
+ 0 -1 2472 7.4260002002120018e-03
+
+ -3.8936998695135117e-02 3.1702101230621338e-01
+ <_>
+
+ 0 -1 2473 1.1213999986648560e-01
+
+ -1.7578299343585968e-01 6.5056598186492920e-01
+ <_>
+
+ 0 -1 2474 -1.1878100037574768e-01
+
+ -1.0092990398406982e+00 1.1069700121879578e-01
+ <_>
+
+ 0 -1 2475 -4.1584998369216919e-02
+
+ -5.3806400299072266e-01 1.9905000925064087e-02
+ <_>
+
+ 0 -1 2476 -2.7966000139713287e-02
+
+ 4.8143199086189270e-01 3.3590998500585556e-02
+ <_>
+
+ 0 -1 2477 -1.2506400048732758e-01
+
+ 2.6352199912071228e-01 -2.5737899541854858e-01
+ <_>
+
+ 0 -1 2478 2.3666900396347046e-01
+
+ 3.6508001387119293e-02 9.0655601024627686e-01
+ <_>
+
+ 0 -1 2479 -2.9475999996066093e-02
+
+ -6.0048800706863403e-01 9.5880003646016121e-03
+ <_>
+
+ 0 -1 2480 3.7792999297380447e-02
+
+ 1.5506200492382050e-01 -9.5733499526977539e-01
+ <_>
+
+ 0 -1 2481 7.2044000029563904e-02
+
+ -1.4525899291038513e-01 1.3676730394363403e+00
+ <_>
+
+ 0 -1 2482 9.7759999334812164e-03
+
+ 1.2915999628603458e-02 2.1640899777412415e-01
+ <_>
+
+ 0 -1 2483 5.2154000848531723e-02
+
+ -1.6359999775886536e-02 -8.8356298208236694e-01
+ <_>
+
+ 0 -1 2484 -4.3790999799966812e-02
+
+ 3.5829600691795349e-01 6.5131001174449921e-02
+ <_>
+
+ 0 -1 2485 -3.8378998637199402e-02
+
+ 1.1961040496826172e+00 -1.4971500635147095e-01
+ <_>
+
+ 0 -1 2486 -9.8838999867439270e-02
+
+ -6.1834001541137695e-01 1.2786200642585754e-01
+ <_>
+
+ 0 -1 2487 -1.2190700322389603e-01
+
+ -1.8276120424270630e+00 -6.4862996339797974e-02
+ <_>
+
+ 0 -1 2488 -1.1981700360774994e-01
+
+ -30. 1.1323300004005432e-01
+ <_>
+
+ 0 -1 2489 3.0910000205039978e-02
+
+ -2.3934000730514526e-01 3.6332899332046509e-01
+ <_>
+
+ 0 -1 2490 1.0800999589264393e-02
+
+ -3.5140000283718109e-02 2.7707898616790771e-01
+ <_>
+
+ 0 -1 2491 5.6844998151063919e-02
+
+ -1.5524299442768097e-01 1.0802700519561768e+00
+ <_>
+
+ 0 -1 2492 1.0280000278726220e-03
+
+ -6.1202999204397202e-02 2.0508000254631042e-01
+ <_>
+
+ 0 -1 2493 -2.8273999691009521e-02
+
+ -6.4778000116348267e-01 2.3917000740766525e-02
+ <_>
+
+ 0 -1 2494 -1.6013599932193756e-01
+
+ 1.0892050266265869e+00 5.8389000594615936e-02
+ <_>
+
+ 0 -1 2495 4.9629998393356800e-03
+
+ -2.5806298851966858e-01 2.0834599435329437e-01
+ <_>
+
+ 0 -1 2496 4.6937000006437302e-02
+
+ 1.3886299729347229e-01 -1.5662620067596436e+00
+ <_>
+
+ 0 -1 2497 2.4286000058054924e-02
+
+ -2.0728300511837006e-01 5.2430999279022217e-01
+ <_>
+
+ 0 -1 2498 7.0202000439167023e-02
+
+ 1.4796899259090424e-01 -1.3095090389251709e+00
+ <_>
+
+ 0 -1 2499 9.8120002076029778e-03
+
+ 2.7906000614166260e-02 -5.0864601135253906e-01
+ <_>
+
+ 0 -1 2500 -5.6200999766588211e-02
+
+ 1.2618130445480347e+00 6.3801996409893036e-02
+ <_>
+
+ 0 -1 2501 1.0982800275087357e-01
+
+ -1.2850099802017212e-01 3.0776169300079346e+00
+ <_>
+ 211
+ -3.3703000545501709e+00
+
+ <_>
+
+ 0 -1 2502 2.0910000428557396e-02
+
+ -6.8559402227401733e-01 3.8984298706054688e-01
+ <_>
+
+ 0 -1 2503 3.5032000392675400e-02
+
+ -4.7724398970603943e-01 4.5027199387550354e-01
+ <_>
+
+ 0 -1 2504 3.9799001067876816e-02
+
+ -4.7011101245880127e-01 4.2702499032020569e-01
+ <_>
+
+ 0 -1 2505 -4.8409998416900635e-03
+
+ 2.5614300370216370e-01 -6.6556298732757568e-01
+ <_>
+
+ 0 -1 2506 2.3439999204128981e-03
+
+ -4.8083499073982239e-01 2.8013798594474792e-01
+ <_>
+
+ 0 -1 2507 2.5312999263405800e-02
+
+ -2.3948200047016144e-01 4.4191798567771912e-01
+ <_>
+
+ 0 -1 2508 -3.2193001359701157e-02
+
+ 7.6086699962615967e-01 -2.5059100985527039e-01
+ <_>
+
+ 0 -1 2509 7.5409002602100372e-02
+
+ -3.4974598884582520e-01 3.4380298852920532e-01
+ <_>
+
+ 0 -1 2510 -1.8469000235199928e-02
+
+ -7.9085600376129150e-01 3.4788001328706741e-02
+ <_>
+
+ 0 -1 2511 -1.2802000157535076e-02
+
+ 4.7107800841331482e-01 -6.0006000101566315e-02
+ <_>
+
+ 0 -1 2512 -2.6598000898957253e-02
+
+ 6.7116099596023560e-01 -2.4257500469684601e-01
+ <_>
+
+ 0 -1 2513 2.1988999098539352e-02
+
+ 2.4717499315738678e-01 -4.8301699757575989e-01
+ <_>
+
+ 0 -1 2514 1.4654099941253662e-01
+
+ -2.1504099667072296e-01 7.2055900096893311e-01
+ <_>
+
+ 0 -1 2515 3.5310001112520695e-03
+
+ 2.7930998802185059e-01 -3.4339898824691772e-01
+ <_>
+
+ 0 -1 2516 9.4010001048445702e-03
+
+ 5.5861998349428177e-02 -8.2143598794937134e-01
+ <_>
+
+ 0 -1 2517 -8.6390003561973572e-03
+
+ -9.9620598554611206e-01 1.8874999880790710e-01
+ <_>
+
+ 0 -1 2518 -3.9193000644445419e-02
+
+ -1.1945559978485107e+00 -2.9198000207543373e-02
+ <_>
+
+ 0 -1 2519 2.4855000898241997e-02
+
+ 1.4987599849700928e-01 -5.4137802124023438e-01
+ <_>
+
+ 0 -1 2520 -3.4995000809431076e-02
+
+ -1.4210180044174194e+00 -4.2314000427722931e-02
+ <_>
+
+ 0 -1 2521 -1.8378999084234238e-02
+
+ -2.8242599964141846e-01 1.5581800043582916e-01
+ <_>
+
+ 0 -1 2522 -1.3592000119388103e-02
+
+ 4.7317099571228027e-01 -2.1937200427055359e-01
+ <_>
+
+ 0 -1 2523 6.2629999592900276e-03
+
+ -5.9714000672101974e-02 6.0625898838043213e-01
+ <_>
+
+ 0 -1 2524 -1.8478000536561012e-02
+
+ -8.5647201538085938e-01 -1.3783999718725681e-02
+ <_>
+
+ 0 -1 2525 1.4236000366508961e-02
+
+ 1.6654799878597260e-01 -2.7713999152183533e-01
+ <_>
+
+ 0 -1 2526 -3.2547000795602798e-02
+
+ -1.1728240251541138e+00 -4.0185000747442245e-02
+ <_>
+
+ 0 -1 2527 -2.6410000864416361e-03
+
+ 2.6514300704002380e-01 -5.6343000382184982e-02
+ <_>
+
+ 0 -1 2528 -8.7799999164417386e-04
+
+ 3.6556001752614975e-02 -5.5075198411941528e-01
+ <_>
+
+ 0 -1 2529 4.7371998429298401e-02
+
+ -4.2614001780748367e-02 4.8194900155067444e-01
+ <_>
+
+ 0 -1 2530 -7.0790001191198826e-03
+
+ 2.8698998689651489e-01 -3.2923001050949097e-01
+ <_>
+
+ 0 -1 2531 -4.3145999312400818e-02
+
+ -1.4065419435501099e+00 1.2836399674415588e-01
+ <_>
+
+ 0 -1 2532 2.0592000335454941e-02
+
+ -2.1435299515724182e-01 5.3981798887252808e-01
+ <_>
+
+ 0 -1 2533 -2.2367000579833984e-02
+
+ 3.3718299865722656e-01 4.5212000608444214e-02
+ <_>
+
+ 0 -1 2534 5.0039999186992645e-02
+
+ -2.5121700763702393e-01 4.1750499606132507e-01
+ <_>
+
+ 0 -1 2535 6.1794999986886978e-02
+
+ 4.0084999054670334e-02 6.8779802322387695e-01
+ <_>
+
+ 0 -1 2536 -4.1861999779939651e-02
+
+ 5.3027397394180298e-01 -2.2901999950408936e-01
+ <_>
+
+ 0 -1 2537 -3.1959998887032270e-03
+
+ 2.5161498785018921e-01 -2.1514600515365601e-01
+ <_>
+
+ 0 -1 2538 2.4255000054836273e-02
+
+ 7.2320001199841499e-03 -7.2519099712371826e-01
+ <_>
+
+ 0 -1 2539 -1.7303999513387680e-02
+
+ -4.9958199262619019e-01 1.8394500017166138e-01
+ <_>
+
+ 0 -1 2540 -4.1470001451671124e-03
+
+ 8.5211999714374542e-02 -4.6364700794219971e-01
+ <_>
+
+ 0 -1 2541 -1.4369999989867210e-02
+
+ -5.2258902788162231e-01 2.3892599344253540e-01
+ <_>
+
+ 0 -1 2542 -9.0399999171495438e-03
+
+ -6.3250398635864258e-01 3.2551001757383347e-02
+ <_>
+
+ 0 -1 2543 -1.2373100221157074e-01
+
+ 1.2856210470199585e+00 7.6545000076293945e-02
+ <_>
+
+ 0 -1 2544 -8.2221999764442444e-02
+
+ 8.3208197355270386e-01 -1.8590599298477173e-01
+ <_>
+
+ 0 -1 2545 6.5659001469612122e-02
+
+ 1.1298800259828568e-01 -30.
+ <_>
+
+ 0 -1 2546 -3.1582999974489212e-02
+
+ -1.3485900163650513e+00 -4.7097001224756241e-02
+ <_>
+
+ 0 -1 2547 -7.9636000096797943e-02
+
+ -1.3533639907836914e+00 1.5668800473213196e-01
+ <_>
+
+ 0 -1 2548 -1.8880000337958336e-02
+
+ 4.0300300717353821e-01 -2.5148901343345642e-01
+ <_>
+
+ 0 -1 2549 -5.0149997696280479e-03
+
+ -2.6287099719047546e-01 1.8582500517368317e-01
+ <_>
+
+ 0 -1 2550 -1.2218000367283821e-02
+
+ 5.8692401647567749e-01 -1.9427700340747833e-01
+ <_>
+
+ 0 -1 2551 1.2710000155493617e-03
+
+ -1.6688999533653259e-01 2.3006899654865265e-01
+ <_>
+
+ 0 -1 2552 2.9743999242782593e-02
+
+ 1.2520000338554382e-02 -6.6723597049713135e-01
+ <_>
+
+ 0 -1 2553 2.8175000101327896e-02
+
+ -1.7060000449419022e-02 6.4579397439956665e-01
+ <_>
+
+ 0 -1 2554 3.0345000326633453e-02
+
+ -2.4178700149059296e-01 3.4878900647163391e-01
+ <_>
+
+ 0 -1 2555 -1.7325999215245247e-02
+
+ -5.3599399328231812e-01 2.0995999872684479e-01
+ <_>
+
+ 0 -1 2556 -8.4178000688552856e-02
+
+ 7.5093299150466919e-01 -1.7593200504779816e-01
+ <_>
+
+ 0 -1 2557 7.4950000271201134e-03
+
+ -1.6188099980354309e-01 3.0657500028610229e-01
+ <_>
+
+ 0 -1 2558 5.6494999676942825e-02
+
+ -1.7318800091743469e-01 1.0016150474548340e+00
+ <_>
+
+ 0 -1 2559 -5.2939997985959053e-03
+
+ 2.3417599499225616e-01 -6.5347000956535339e-02
+ <_>
+
+ 0 -1 2560 -1.4945000410079956e-02
+
+ 2.5018900632858276e-01 -3.0591198801994324e-01
+ <_>
+
+ 0 -1 2561 5.4919000715017319e-02
+
+ 1.3121999800205231e-01 -9.3765097856521606e-01
+ <_>
+
+ 0 -1 2562 -1.9721999764442444e-02
+
+ -8.3978497982025146e-01 -2.3473000153899193e-02
+ <_>
+
+ 0 -1 2563 -6.7158997058868408e-02
+
+ 2.3586840629577637e+00 8.2970999181270599e-02
+ <_>
+
+ 0 -1 2564 -1.4325999654829502e-02
+
+ 1.8814499676227570e-01 -3.1221601366996765e-01
+ <_>
+
+ 0 -1 2565 2.9841000214219093e-02
+
+ 1.4825099706649780e-01 -8.4681701660156250e-01
+ <_>
+
+ 0 -1 2566 5.1883000880479813e-02
+
+ -4.3731000274419785e-02 -1.3366169929504395e+00
+ <_>
+
+ 0 -1 2567 4.1127000004053116e-02
+
+ 1.7660099267959595e-01 -6.0904097557067871e-01
+ <_>
+
+ 0 -1 2568 -1.2865099310874939e-01
+
+ -9.8701000213623047e-01 -3.7785001099109650e-02
+ <_>
+
+ 0 -1 2569 2.4170000106096268e-03
+
+ -1.6119599342346191e-01 3.2675701379776001e-01
+ <_>
+
+ 0 -1 2570 7.7030002139508724e-03
+
+ -2.3841500282287598e-01 2.9319399595260620e-01
+ <_>
+
+ 0 -1 2571 4.5520000159740448e-02
+
+ 1.4424599707126617e-01 -1.5010160207748413e+00
+ <_>
+
+ 0 -1 2572 -7.8700996935367584e-02
+
+ -1.0394560098648071e+00 -4.5375999063253403e-02
+ <_>
+
+ 0 -1 2573 7.8619997948408127e-03
+
+ 1.9633600115776062e-01 -1.4472399652004242e-01
+ <_>
+
+ 0 -1 2574 -1.3458999805152416e-02
+
+ -9.0634697675704956e-01 -3.8049001246690750e-02
+ <_>
+
+ 0 -1 2575 2.8827000409364700e-02
+
+ -2.9473999515175819e-02 6.0058397054672241e-01
+ <_>
+
+ 0 -1 2576 -2.7365999296307564e-02
+
+ -9.9804002046585083e-01 -3.8653001189231873e-02
+ <_>
+
+ 0 -1 2577 -7.2917997837066650e-02
+
+ 7.3361498117446899e-01 5.7440001517534256e-02
+ <_>
+
+ 0 -1 2578 -1.3988999649882317e-02
+
+ 2.7892601490020752e-01 -2.6516300439834595e-01
+ <_>
+
+ 0 -1 2579 4.3242998421192169e-02
+
+ 4.7760000452399254e-03 3.5925900936126709e-01
+ <_>
+
+ 0 -1 2580 2.9533000662922859e-02
+
+ -2.0083999633789062e-01 5.1202899217605591e-01
+ <_>
+
+ 0 -1 2581 -3.1897000968456268e-02
+
+ 6.4721697568893433e-01 -1.3760000001639128e-03
+ <_>
+
+ 0 -1 2582 3.7868998944759369e-02
+
+ -1.8363800644874573e-01 6.1343097686767578e-01
+ <_>
+
+ 0 -1 2583 -2.2417999804019928e-02
+
+ -2.9187899827957153e-01 1.8194800615310669e-01
+ <_>
+
+ 0 -1 2584 5.8958999812602997e-02
+
+ -6.6451996564865112e-02 -1.9290030002593994e+00
+ <_>
+
+ 0 -1 2585 3.1222999095916748e-02
+
+ -1.2732000090181828e-02 6.1560797691345215e-01
+ <_>
+
+ 0 -1 2586 3.7484999746084213e-02
+
+ -2.0856900513172150e-01 4.4363999366760254e-01
+ <_>
+
+ 0 -1 2587 -2.0966000854969025e-02
+
+ -3.5712799429893494e-01 2.4252200126647949e-01
+ <_>
+
+ 0 -1 2588 -2.5477999821305275e-02
+
+ 1.0846560001373291e+00 -1.5054400265216827e-01
+ <_>
+
+ 0 -1 2589 -7.2570000775158405e-03
+
+ 2.1302600204944611e-01 -1.8308199942111969e-01
+ <_>
+
+ 0 -1 2590 -5.0983000546693802e-02
+
+ 5.1736801862716675e-01 -1.8833099305629730e-01
+ <_>
+
+ 0 -1 2591 -2.0640000700950623e-02
+
+ -4.4030201435089111e-01 2.2745999693870544e-01
+ <_>
+
+ 0 -1 2592 1.0672999545931816e-02
+
+ 3.5059999674558640e-02 -5.1665002107620239e-01
+ <_>
+
+ 0 -1 2593 3.1895998865365982e-02
+
+ 1.3228000141680241e-02 3.4915199875831604e-01
+ <_>
+
+ 0 -1 2594 -2.3824999108910561e-02
+
+ 3.4118801355361938e-01 -2.1510200202465057e-01
+ <_>
+
+ 0 -1 2595 -6.0680001042783260e-03
+
+ 3.2937398552894592e-01 -2.8523799777030945e-01
+ <_>
+
+ 0 -1 2596 2.3881999775767326e-02
+
+ -2.5333800911903381e-01 2.6296100020408630e-01
+ <_>
+
+ 0 -1 2597 2.7966000139713287e-02
+
+ 1.4049099385738373e-01 -4.9887099862098694e-01
+ <_>
+
+ 0 -1 2598 1.4603000134229660e-02
+
+ -1.5395999886095524e-02 -7.6958000659942627e-01
+ <_>
+
+ 0 -1 2599 1.0872399806976318e-01
+
+ 1.9069600105285645e-01 -3.2393100857734680e-01
+ <_>
+
+ 0 -1 2600 -1.4038000255823135e-02
+
+ 3.4924700856208801e-01 -2.2358700633049011e-01
+ <_>
+
+ 0 -1 2601 4.0440000593662262e-03
+
+ -3.8329001516103745e-02 5.1177299022674561e-01
+ <_>
+
+ 0 -1 2602 -4.9769999459385872e-03
+
+ -4.2888298630714417e-01 4.9173999577760696e-02
+ <_>
+
+ 0 -1 2603 -8.5183002054691315e-02
+
+ 6.6624599695205688e-01 7.8079998493194580e-03
+ <_>
+
+ 0 -1 2604 2.1559998858720064e-03
+
+ -4.9135199189186096e-01 6.9555997848510742e-02
+ <_>
+
+ 0 -1 2605 3.6384499073028564e-01
+
+ 1.2997099757194519e-01 -1.8949509859085083e+00
+ <_>
+
+ 0 -1 2606 2.2082500159740448e-01
+
+ -5.7211998850107193e-02 -1.4281120300292969e+00
+ <_>
+
+ 0 -1 2607 -1.6140000894665718e-02
+
+ -5.7589399814605713e-01 1.8062500655651093e-01
+ <_>
+
+ 0 -1 2608 -4.8330001533031464e-02
+
+ 9.7308498620986938e-01 -1.6513000428676605e-01
+ <_>
+
+ 0 -1 2609 1.7529999837279320e-02
+
+ 1.7932699620723724e-01 -2.7948901057243347e-01
+ <_>
+
+ 0 -1 2610 -3.4309998154640198e-02
+
+ -8.1072497367858887e-01 -1.6596000641584396e-02
+ <_>
+
+ 0 -1 2611 -4.5830002054572105e-03
+
+ 2.7908998727798462e-01 -7.4519999325275421e-03
+ <_>
+
+ 0 -1 2612 1.2896400690078735e-01
+
+ -1.3508500158786774e-01 2.5411539077758789e+00
+ <_>
+
+ 0 -1 2613 3.0361000448465347e-02
+
+ -6.8419001996517181e-02 2.8734099864959717e-01
+ <_>
+
+ 0 -1 2614 4.4086001813411713e-02
+
+ -1.8135899305343628e-01 6.5413200855255127e-01
+ <_>
+
+ 0 -1 2615 3.0159999150782824e-03
+
+ -1.5690499544143677e-01 2.6963800191879272e-01
+ <_>
+
+ 0 -1 2616 -2.6336999610066414e-02
+
+ 2.9175600409507751e-01 -2.5274100899696350e-01
+ <_>
+
+ 0 -1 2617 -2.7866000309586525e-02
+
+ 4.4387501478195190e-01 5.5038001388311386e-02
+ <_>
+
+ 0 -1 2618 1.1725000105798244e-02
+
+ -1.9346499443054199e-01 4.6656700968742371e-01
+ <_>
+
+ 0 -1 2619 1.5689999563619494e-03
+
+ -8.2360003143548965e-03 2.5700899958610535e-01
+ <_>
+
+ 0 -1 2620 -3.5550000611692667e-03
+
+ -4.2430898547172546e-01 7.1174003183841705e-02
+ <_>
+
+ 0 -1 2621 -3.1695000827312469e-02
+
+ -8.5393500328063965e-01 1.6916200518608093e-01
+ <_>
+
+ 0 -1 2622 -3.2097000628709793e-02
+
+ 8.3784902095794678e-01 -1.7597299814224243e-01
+ <_>
+
+ 0 -1 2623 1.5544199943542480e-01
+
+ 9.9550001323223114e-02 2.3873300552368164e+00
+ <_>
+
+ 0 -1 2624 8.8045999407768250e-02
+
+ -1.8725299835205078e-01 6.2384301424026489e-01
+ <_>
+
+ 0 -1 2625 -1.6720000421628356e-03
+
+ 2.5008699297904968e-01 -6.5118998289108276e-02
+ <_>
+
+ 0 -1 2626 9.3409996479749680e-03
+
+ -3.5378900170326233e-01 1.0715000331401825e-01
+ <_>
+
+ 0 -1 2627 3.7138000130653381e-02
+
+ 1.6387000679969788e-01 -9.1718399524688721e-01
+ <_>
+
+ 0 -1 2628 8.0183997750282288e-02
+
+ -1.4812999963760376e-01 1.4895190000534058e+00
+ <_>
+
+ 0 -1 2629 -7.9100002767518163e-04
+
+ -2.1326899528503418e-01 1.9676400721073151e-01
+ <_>
+
+ 0 -1 2630 -5.0400001928210258e-03
+
+ -7.1318697929382324e-01 1.8240000354126096e-03
+ <_>
+
+ 0 -1 2631 1.1962399631738663e-01
+
+ 3.3098999410867691e-02 1.0441709756851196e+00
+ <_>
+
+ 0 -1 2632 -4.5280000194907188e-03
+
+ -2.7308499813079834e-01 2.7229800820350647e-01
+ <_>
+
+ 0 -1 2633 -2.9639000073075294e-02
+
+ 3.6225798726081848e-01 5.6795001029968262e-02
+ <_>
+
+ 0 -1 2634 2.6650000363588333e-02
+
+ -4.8041000962257385e-02 -9.6723502874374390e-01
+ <_>
+
+ 0 -1 2635 4.4422000646591187e-02
+
+ 1.3052900135517120e-01 -3.5077300667762756e-01
+ <_>
+
+ 0 -1 2636 -2.4359999224543571e-02
+
+ -1.0766899585723877e+00 -5.1222998648881912e-02
+ <_>
+
+ 0 -1 2637 1.9734999164938927e-02
+
+ 2.6238000020384789e-02 2.8070500493049622e-01
+ <_>
+
+ 0 -1 2638 5.4930001497268677e-03
+
+ -2.6111298799514771e-01 2.1011400222778320e-01
+ <_>
+
+ 0 -1 2639 -2.3200300335884094e-01
+
+ -1.7748440504074097e+00 1.1482600122690201e-01
+ <_>
+
+ 0 -1 2640 -2.5614000856876373e-02
+
+ 2.9900801181793213e-01 -2.2502499818801880e-01
+ <_>
+
+ 0 -1 2641 -6.4949998632073402e-03
+
+ 1.9563800096511841e-01 -9.9762998521327972e-02
+ <_>
+
+ 0 -1 2642 3.9840000681579113e-03
+
+ -4.3021500110626221e-01 8.1261001527309418e-02
+ <_>
+
+ 0 -1 2643 -3.5813000053167343e-02
+
+ -5.0987398624420166e-01 1.6345900297164917e-01
+ <_>
+
+ 0 -1 2644 -1.4169000089168549e-02
+
+ 7.7978098392486572e-01 -1.7476299405097961e-01
+ <_>
+
+ 0 -1 2645 -1.2642100453376770e-01
+
+ -6.3047897815704346e-01 1.2728300690650940e-01
+ <_>
+
+ 0 -1 2646 6.8677999079227448e-02
+
+ -4.6447999775409698e-02 -1.1128979921340942e+00
+ <_>
+
+ 0 -1 2647 8.5864998400211334e-02
+
+ 1.1835400015115738e-01 -4.8235158920288086e+00
+ <_>
+
+ 0 -1 2648 1.5511999838054180e-02
+
+ -1.7467999830842018e-02 -6.3693398237228394e-01
+ <_>
+
+ 0 -1 2649 8.1091001629829407e-02
+
+ 8.6133003234863281e-02 2.4559431076049805e+00
+ <_>
+
+ 0 -1 2650 1.8495000898838043e-02
+
+ 4.0229000151157379e-02 -5.0858199596405029e-01
+ <_>
+
+ 0 -1 2651 -8.6320996284484863e-02
+
+ -1.9006760120391846e+00 1.1019100248813629e-01
+ <_>
+
+ 0 -1 2652 7.2355002164840698e-02
+
+ -6.2111999839544296e-02 -1.4165179729461670e+00
+ <_>
+
+ 0 -1 2653 -7.8179001808166504e-02
+
+ 8.8849300146102905e-01 4.2369998991489410e-02
+ <_>
+
+ 0 -1 2654 9.6681997179985046e-02
+
+ -2.2094200551509857e-01 3.3575099706649780e-01
+ <_>
+
+ 0 -1 2655 -3.9875999093055725e-02
+
+ 5.7804799079895020e-01 4.5347999781370163e-02
+ <_>
+
+ 0 -1 2656 -9.5349997282028198e-03
+
+ -5.4175698757171631e-01 3.2399999909102917e-03
+ <_>
+
+ 0 -1 2657 4.0600000647827983e-04
+
+ -8.1549003720283508e-02 3.5837900638580322e-01
+ <_>
+
+ 0 -1 2658 1.2107999995350838e-02
+
+ -2.0280399918556213e-01 4.3768000602722168e-01
+ <_>
+
+ 0 -1 2659 -2.0873999223113060e-02
+
+ 4.1469898819923401e-01 -4.5568000525236130e-02
+ <_>
+
+ 0 -1 2660 5.7888001203536987e-02
+
+ -2.9009999707341194e-02 -9.1822302341461182e-01
+ <_>
+
+ 0 -1 2661 1.3200000103097409e-04
+
+ -1.1772400140762329e-01 2.0000000298023224e-01
+ <_>
+
+ 0 -1 2662 -1.7137000337243080e-02
+
+ 3.3004799485206604e-01 -2.3055200278759003e-01
+ <_>
+
+ 0 -1 2663 3.0655000358819962e-02
+
+ -2.1545000374317169e-02 2.6878198981285095e-01
+ <_>
+
+ 0 -1 2664 -7.8699999721720815e-04
+
+ -4.4100698828697205e-01 4.9157999455928802e-02
+ <_>
+
+ 0 -1 2665 8.8036999106407166e-02
+
+ 1.1782000213861465e-01 -2.8293309211730957e+00
+ <_>
+
+ 0 -1 2666 -3.9028998464345932e-02
+
+ 9.1777199506759644e-01 -1.5827399492263794e-01
+ <_>
+
+ 0 -1 2667 8.0105997622013092e-02
+
+ 1.1289200186729431e-01 -1.9937280416488647e+00
+ <_>
+
+ 0 -1 2668 3.9538998156785965e-02
+
+ -1.4357399940490723e-01 1.3085240125656128e+00
+ <_>
+
+ 0 -1 2669 2.0684000104665756e-02
+
+ 2.0048099756240845e-01 -4.4186998158693314e-02
+ <_>
+
+ 0 -1 2670 -6.7037999629974365e-02
+
+ 3.2618600130081177e-01 -2.0550400018692017e-01
+ <_>
+
+ 0 -1 2671 4.6815000474452972e-02
+
+ 1.5825299918651581e-01 -9.5535099506378174e-01
+ <_>
+
+ 0 -1 2672 7.8443996608257294e-02
+
+ -7.4651002883911133e-02 -2.1161499023437500e+00
+ <_>
+
+ 0 -1 2673 6.6380001604557037e-02
+
+ 1.1641900241374969e-01 -1.6113519668579102e+00
+ <_>
+
+ 0 -1 2674 3.0053999274969101e-02
+
+ -1.6562600433826447e-01 7.0025402307510376e-01
+ <_>
+
+ 0 -1 2675 1.7119999974966049e-02
+
+ 2.2627699375152588e-01 -4.0114998817443848e-01
+ <_>
+
+ 0 -1 2676 2.0073000341653824e-02
+
+ -1.9389699399471283e-01 4.4420298933982849e-01
+ <_>
+
+ 0 -1 2677 3.3101998269557953e-02
+
+ 1.1637499928474426e-01 -1.5771679878234863e+00
+ <_>
+
+ 0 -1 2678 -1.4882000163197517e-02
+
+ -8.9680302143096924e-01 -4.2010001838207245e-02
+ <_>
+
+ 0 -1 2679 -1.0281000286340714e-02
+
+ 3.5602998733520508e-01 -1.3124000281095505e-02
+ <_>
+
+ 0 -1 2680 -2.8695000335574150e-02
+
+ -4.6039599180221558e-01 2.6801999658346176e-02
+ <_>
+
+ 0 -1 2681 -4.7189998440444469e-03
+
+ 2.3788799345493317e-01 -6.5518997609615326e-02
+ <_>
+
+ 0 -1 2682 3.2201600074768066e-01
+
+ -2.8489999473094940e-02 -8.4234601259231567e-01
+ <_>
+
+ 0 -1 2683 -1.7045000568032265e-02
+
+ -5.0938802957534790e-01 1.6057600080966949e-01
+ <_>
+
+ 0 -1 2684 -7.3469998314976692e-03
+
+ -5.4154998064041138e-01 4.7320001758635044e-03
+ <_>
+
+ 0 -1 2685 -3.0001999810338020e-02
+
+ -8.8785797357559204e-01 1.3621799647808075e-01
+ <_>
+
+ 0 -1 2686 -1.1292999610304832e-02
+
+ 8.0615198612213135e-01 -1.6159500181674957e-01
+ <_>
+
+ 0 -1 2687 4.7749998047947884e-03
+
+ 1.2968000024557114e-02 5.5079901218414307e-01
+ <_>
+
+ 0 -1 2688 5.0710001960396767e-03
+
+ -4.5728001743555069e-02 -1.0766259431838989e+00
+ <_>
+
+ 0 -1 2689 1.9344100356101990e-01
+
+ 7.1262001991271973e-02 1.1694519519805908e+00
+ <_>
+
+ 0 -1 2690 5.3750001825392246e-03
+
+ -1.9736200571060181e-01 3.8206899166107178e-01
+ <_>
+
+ 0 -1 2691 -6.8276003003120422e-02
+
+ -5.4372339248657227e+00 1.1151900142431259e-01
+ <_>
+
+ 0 -1 2692 -3.4933000802993774e-02
+
+ 4.4793400168418884e-01 -1.8657900393009186e-01
+ <_>
+
+ 0 -1 2693 5.1219998858869076e-03
+
+ -1.4871999621391296e-02 1.8413899838924408e-01
+ <_>
+
+ 0 -1 2694 9.5311999320983887e-02
+
+ -1.5117099881172180e-01 9.4991499185562134e-01
+ <_>
+
+ 0 -1 2695 -6.2849000096321106e-02
+
+ 4.6473601460456848e-01 3.8405001163482666e-02
+ <_>
+
+ 0 -1 2696 -1.7040699720382690e-01
+
+ -1.6499999761581421e+00 -6.3236996531486511e-02
+ <_>
+
+ 0 -1 2697 1.0583999566733837e-02
+
+ -3.8348998874425888e-02 4.1913801431655884e-01
+ <_>
+
+ 0 -1 2698 -4.1579000651836395e-02
+
+ 3.4461900591850281e-01 -2.1187700331211090e-01
+ <_>
+
+ 0 -1 2699 1.2718600034713745e-01
+
+ 1.2398199737071991e-01 -2.1254889965057373e+00
+ <_>
+
+ 0 -1 2700 8.2557000219821930e-02
+
+ -6.2024001032114029e-02 -1.4875819683074951e+00
+ <_>
+
+ 0 -1 2701 8.5293002426624298e-02
+
+ 1.7087999731302261e-02 3.2076600193977356e-01
+ <_>
+
+ 0 -1 2702 5.5544000118970871e-02
+
+ -2.7414000034332275e-01 1.8976399302482605e-01
+ <_>
+
+ 0 -1 2703 4.5650000683963299e-03
+
+ -1.7920200526714325e-01 2.7967301011085510e-01
+ <_>
+
+ 0 -1 2704 1.2997999787330627e-02
+
+ -3.2297500967979431e-01 2.6941800117492676e-01
+ <_>
+
+ 0 -1 2705 5.7891998440027237e-02
+
+ 1.2644399702548981e-01 -6.0713499784469604e-01
+ <_>
+
+ 0 -1 2706 -2.2824000567197800e-02
+
+ -4.9682098627090454e-01 2.2376999258995056e-02
+ <_>
+
+ 0 -1 2707 4.8312000930309296e-02
+
+ 4.3607000261545181e-02 4.8537799715995789e-01
+ <_>
+
+ 0 -1 2708 2.5714000687003136e-02
+
+ -4.2950998991727829e-02 -9.3023502826690674e-01
+ <_>
+
+ 0 -1 2709 6.9269998930394650e-03
+
+ -2.9680000152438879e-03 3.4296301007270813e-01
+ <_>
+
+ 0 -1 2710 -3.4446999430656433e-02
+
+ -1.5299769639968872e+00 -6.1014998704195023e-02
+ <_>
+
+ 0 -1 2711 2.9387999325990677e-02
+
+ 3.7595998495817184e-02 6.4172399044036865e-01
+ <_>
+
+ 0 -1 2712 -2.4319998919963837e-03
+
+ 9.9088996648788452e-02 -3.9688101410865784e-01
+ <_>
+ 200
+ -2.9928278923034668e+00
+
+ <_>
+
+ 0 -1 2713 -9.5944002270698547e-02
+
+ 6.2419098615646362e-01 -4.5875200629234314e-01
+ <_>
+
+ 0 -1 2714 1.6834000125527382e-02
+
+ -9.3072801828384399e-01 2.1563600003719330e-01
+ <_>
+
+ 0 -1 2715 2.6049999520182610e-02
+
+ -4.0532299876213074e-01 4.2256599664688110e-01
+ <_>
+
+ 0 -1 2716 3.6500001442618668e-04
+
+ 9.5288001000881195e-02 -6.3298100233078003e-01
+ <_>
+
+ 0 -1 2717 -6.6940002143383026e-03
+
+ 3.7243801355361938e-01 -3.0332401394844055e-01
+ <_>
+
+ 0 -1 2718 1.8874000757932663e-02
+
+ -2.3357200622558594e-01 4.0330699086189270e-01
+ <_>
+
+ 0 -1 2719 -1.6300000424962491e-04
+
+ 4.2886998504400253e-02 -7.7796798944473267e-01
+ <_>
+
+ 0 -1 2720 -7.6259002089500427e-02
+
+ -4.9628499150276184e-01 1.6335399448871613e-01
+ <_>
+
+ 0 -1 2721 5.0149001181125641e-02
+
+ 3.2747000455856323e-02 -8.0047899484634399e-01
+ <_>
+
+ 0 -1 2722 -2.9239999130368233e-03
+
+ -5.0002801418304443e-01 2.5480601191520691e-01
+ <_>
+
+ 0 -1 2723 1.6243999823927879e-02
+
+ 3.8913000375032425e-02 -7.0724898576736450e-01
+ <_>
+
+ 0 -1 2724 3.7811998277902603e-02
+
+ -6.6267997026443481e-02 7.3868799209594727e-01
+ <_>
+
+ 0 -1 2725 -1.2319999746978283e-02
+
+ 4.8696398735046387e-01 -2.4485599994659424e-01
+ <_>
+
+ 0 -1 2726 5.8003999292850494e-02
+
+ 1.3459099829196930e-01 -1.3232100009918213e-01
+ <_>
+
+ 0 -1 2727 4.8630000092089176e-03
+
+ -4.4172900915145874e-01 1.4005599915981293e-01
+ <_>
+
+ 0 -1 2728 4.5690998435020447e-02
+
+ 3.1217999756336212e-02 8.9818298816680908e-01
+ <_>
+
+ 0 -1 2729 2.1321000531315804e-02
+
+ 1.2008000165224075e-02 -8.6066198348999023e-01
+ <_>
+
+ 0 -1 2730 1.5679100155830383e-01
+
+ 1.4055999927222729e-02 8.5332900285720825e-01
+ <_>
+
+ 0 -1 2731 -1.0328999720513821e-02
+
+ 2.9022800922393799e-01 -2.9478800296783447e-01
+ <_>
+
+ 0 -1 2732 2.4290001019835472e-03
+
+ -4.0439900755882263e-01 1.9400200247764587e-01
+ <_>
+
+ 0 -1 2733 -2.3338999599218369e-02
+
+ 3.2945200800895691e-01 -2.5712698698043823e-01
+ <_>
+
+ 0 -1 2734 -6.8970001302659512e-03
+
+ -5.3352999687194824e-01 2.1635200083255768e-01
+ <_>
+
+ 0 -1 2735 -3.4403000026941299e-02
+
+ -1.4425489902496338e+00 -4.4682998210191727e-02
+ <_>
+
+ 0 -1 2736 -2.1235000342130661e-02
+
+ -7.9017502069473267e-01 1.9084100425243378e-01
+ <_>
+
+ 0 -1 2737 2.0620001014322042e-03
+
+ -2.6931199431419373e-01 3.1488001346588135e-01
+ <_>
+
+ 0 -1 2738 -4.2190002277493477e-03
+
+ -5.4464399814605713e-01 1.6574600338935852e-01
+ <_>
+
+ 0 -1 2739 -1.4334999956190586e-02
+
+ 2.2105000913143158e-02 -6.2342500686645508e-01
+ <_>
+
+ 0 -1 2740 -8.2120001316070557e-03
+
+ -4.9884998798370361e-01 1.9237099587917328e-01
+ <_>
+
+ 0 -1 2741 -9.3350000679492950e-03
+
+ -7.9131197929382324e-01 -1.4143999665975571e-02
+ <_>
+
+ 0 -1 2742 -3.7937998771667480e-02
+
+ 7.9841297864913940e-01 -3.3799000084400177e-02
+ <_>
+
+ 0 -1 2743 4.7059999778866768e-03
+
+ -3.3163401484489441e-01 2.0726299285888672e-01
+ <_>
+
+ 0 -1 2744 -4.4499998912215233e-03
+
+ -2.7256301045417786e-01 1.8402199447154999e-01
+ <_>
+
+ 0 -1 2745 5.2189999260008335e-03
+
+ -5.3096002340316772e-01 5.2607998251914978e-02
+ <_>
+
+ 0 -1 2746 -9.5399999991059303e-03
+
+ -5.6485402584075928e-01 1.9269399344921112e-01
+ <_>
+
+ 0 -1 2747 4.4969998300075531e-02
+
+ -1.7411500215530396e-01 9.5382601022720337e-01
+ <_>
+
+ 0 -1 2748 1.4209000393748283e-02
+
+ -9.1949000954627991e-02 2.4836100637912750e-01
+ <_>
+
+ 0 -1 2749 1.6380199790000916e-01
+
+ -5.8497000485658646e-02 -1.6404409408569336e+00
+ <_>
+
+ 0 -1 2750 2.5579999200999737e-03
+
+ 2.3447999358177185e-01 -9.2734001576900482e-02
+ <_>
+
+ 0 -1 2751 -3.8499999791383743e-03
+
+ 1.7880700528621674e-01 -3.5844099521636963e-01
+ <_>
+
+ 0 -1 2752 -2.5221999734640121e-02
+
+ -4.2903000116348267e-01 2.0244500041007996e-01
+ <_>
+
+ 0 -1 2753 -1.9415000453591347e-02
+
+ 5.8016300201416016e-01 -1.8806399405002594e-01
+ <_>
+
+ 0 -1 2754 1.4419999904930592e-02
+
+ 3.2846998423337936e-02 8.1980502605438232e-01
+ <_>
+
+ 0 -1 2755 5.1582999527454376e-02
+
+ 6.9176003336906433e-02 -4.5866298675537109e-01
+ <_>
+
+ 0 -1 2756 -3.7960000336170197e-02
+
+ -1.2553000450134277e+00 1.4332899451255798e-01
+ <_>
+
+ 0 -1 2757 -2.9560999944806099e-02
+
+ 5.3151798248291016e-01 -2.0596499741077423e-01
+ <_>
+
+ 0 -1 2758 -3.9110999554395676e-02
+
+ 1.1658719778060913e+00 5.3897000849246979e-02
+ <_>
+
+ 0 -1 2759 -2.9159000143408775e-02
+
+ 3.9307600259780884e-01 -2.2184500098228455e-01
+ <_>
+
+ 0 -1 2760 -8.3617001771926880e-02
+
+ -7.3744499683380127e-01 1.4268200099468231e-01
+ <_>
+
+ 0 -1 2761 4.2004001140594482e-01
+
+ -1.4277400076389313e-01 1.7894840240478516e+00
+ <_>
+
+ 0 -1 2762 6.0005001723766327e-02
+
+ 1.1976700276136398e-01 -1.8886189460754395e+00
+ <_>
+
+ 0 -1 2763 -1.8981000408530235e-02
+
+ -1.4148449897766113e+00 -5.6522998958826065e-02
+ <_>
+
+ 0 -1 2764 -6.0049998573958874e-03
+
+ 4.4170799851417542e-01 -1.0200800001621246e-01
+ <_>
+
+ 0 -1 2765 -5.8214001357555389e-02
+
+ -1.3918470144271851e+00 -4.8268999904394150e-02
+ <_>
+
+ 0 -1 2766 -1.2271000072360039e-02
+
+ 5.1317697763442993e-01 -9.3696996569633484e-02
+ <_>
+
+ 0 -1 2767 4.6585999429225922e-02
+
+ -5.7484000921249390e-02 -1.4283169507980347e+00
+ <_>
+
+ 0 -1 2768 1.2110000243410468e-03
+
+ -8.0891996622085571e-02 3.2333201169967651e-01
+ <_>
+
+ 0 -1 2769 -8.8642001152038574e-02
+
+ -8.6449098587036133e-01 -3.3146999776363373e-02
+ <_>
+
+ 0 -1 2770 -2.3184999823570251e-02
+
+ 5.2162200212478638e-01 -1.6168000176548958e-02
+ <_>
+
+ 0 -1 2771 4.3090000748634338e-02
+
+ -1.6153800487518311e-01 1.0915000438690186e+00
+ <_>
+
+ 0 -1 2772 2.0599999697878957e-04
+
+ -1.7091499269008636e-01 3.1236699223518372e-01
+ <_>
+
+ 0 -1 2773 8.9159999042749405e-03
+
+ -6.7039998248219490e-03 -6.8810397386550903e-01
+ <_>
+
+ 0 -1 2774 -1.7752999439835548e-02
+
+ 6.3292801380157471e-01 -4.2360001243650913e-03
+ <_>
+
+ 0 -1 2775 6.2299999408423901e-03
+
+ -3.3637198805809021e-01 1.2790599465370178e-01
+ <_>
+
+ 0 -1 2776 2.2770000621676445e-02
+
+ -3.4703999757766724e-02 3.9141800999641418e-01
+ <_>
+
+ 0 -1 2777 -2.1534999832510948e-02
+
+ 6.4765101671218872e-01 -2.0097799599170685e-01
+ <_>
+
+ 0 -1 2778 6.1758998781442642e-02
+
+ 5.4297000169754028e-02 9.0700101852416992e-01
+ <_>
+
+ 0 -1 2779 -7.8069999814033508e-02
+
+ 6.5523397922515869e-01 -1.9754399359226227e-01
+ <_>
+
+ 0 -1 2780 1.1315000243484974e-02
+
+ 1.9385300576686859e-01 -5.1707297563552856e-01
+ <_>
+
+ 0 -1 2781 -2.5590000674128532e-02
+
+ -9.3096500635147095e-01 -3.1546998769044876e-02
+ <_>
+
+ 0 -1 2782 -3.8058999925851822e-02
+
+ -6.8326902389526367e-01 1.2709100544452667e-01
+ <_>
+
+ 0 -1 2783 9.7970003262162209e-03
+
+ 1.5523999929428101e-02 -6.3347899913787842e-01
+ <_>
+
+ 0 -1 2784 -1.3841999694705009e-02
+
+ 1.0060529708862305e+00 6.2812998890876770e-02
+ <_>
+
+ 0 -1 2785 8.3459997549653053e-03
+
+ -2.3383200168609619e-01 3.0982699990272522e-01
+ <_>
+
+ 0 -1 2786 -7.1439996361732483e-02
+
+ -7.2505402565002441e-01 1.7148299515247345e-01
+ <_>
+
+ 0 -1 2787 1.0006000287830830e-02
+
+ -2.2071999311447144e-01 3.5266199707984924e-01
+ <_>
+
+ 0 -1 2788 1.1005300283432007e-01
+
+ 1.6662000119686127e-01 -7.4318999052047729e-01
+ <_>
+
+ 0 -1 2789 3.5310998558998108e-02
+
+ -2.3982700705528259e-01 4.1435998678207397e-01
+ <_>
+
+ 0 -1 2790 -1.1174699664115906e-01
+
+ 5.1045399904251099e-01 2.2319999989122152e-03
+ <_>
+
+ 0 -1 2791 -1.1367800086736679e-01
+
+ 9.0475201606750488e-01 -1.6615299880504608e-01
+ <_>
+
+ 0 -1 2792 1.6667999327182770e-02
+
+ 1.4024500548839569e-01 -5.2178502082824707e-01
+ <_>
+
+ 0 -1 2793 -8.0340001732110977e-03
+
+ -6.6178399324417114e-01 3.7640000227838755e-03
+ <_>
+
+ 0 -1 2794 -3.3096998929977417e-02
+
+ 8.0185902118682861e-01 5.9385001659393311e-02
+ <_>
+
+ 0 -1 2795 1.2547999620437622e-02
+
+ -3.3545500040054321e-01 1.4578600227832794e-01
+ <_>
+
+ 0 -1 2796 -4.2073998600244522e-02
+
+ -5.5509102344512939e-01 1.3266600668430328e-01
+ <_>
+
+ 0 -1 2797 2.5221999734640121e-02
+
+ -6.1631999909877777e-02 -1.3678770065307617e+00
+ <_>
+
+ 0 -1 2798 -2.4268999695777893e-02
+
+ 3.4185099601745605e-01 -7.4160001240670681e-03
+ <_>
+
+ 0 -1 2799 -1.2280000373721123e-02
+
+ 2.7745801210403442e-01 -3.1033900380134583e-01
+ <_>
+
+ 0 -1 2800 -1.1377099901437759e-01
+
+ 1.1719540357589722e+00 8.3681002259254456e-02
+ <_>
+
+ 0 -1 2801 -8.4771998226642609e-02
+
+ 8.1694799661636353e-01 -1.7837500572204590e-01
+ <_>
+
+ 0 -1 2802 -2.4552000686526299e-02
+
+ -1.8627299368381500e-01 1.4340099692344666e-01
+ <_>
+
+ 0 -1 2803 -9.0269995853304863e-03
+
+ 3.2659199833869934e-01 -2.3541299998760223e-01
+ <_>
+
+ 0 -1 2804 1.1177999898791313e-02
+
+ 1.9761200249195099e-01 -2.1701000630855560e-02
+ <_>
+
+ 0 -1 2805 -2.9366999864578247e-02
+
+ -9.3414801359176636e-01 -2.1704999729990959e-02
+ <_>
+
+ 0 -1 2806 6.3640000298619270e-03
+
+ 2.5573000311851501e-02 4.6412798762321472e-01
+ <_>
+
+ 0 -1 2807 1.4026000164449215e-02
+
+ -2.1228599548339844e-01 4.0078800916671753e-01
+ <_>
+
+ 0 -1 2808 -1.3341999612748623e-02
+
+ 7.4202698469161987e-01 2.9001999646425247e-02
+ <_>
+
+ 0 -1 2809 2.8422799706459045e-01
+
+ -1.9243599474430084e-01 4.3631199002265930e-01
+ <_>
+
+ 0 -1 2810 -2.3724000155925751e-01
+
+ 6.9736397266387939e-01 6.9307997822761536e-02
+ <_>
+
+ 0 -1 2811 -1.1169700324535370e-01
+
+ 3.9147201180458069e-01 -2.0922000706195831e-01
+ <_>
+
+ 0 -1 2812 1.2787500023841858e-01
+
+ -7.2555996477603912e-02 3.6088201403617859e-01
+ <_>
+
+ 0 -1 2813 -6.2900997698307037e-02
+
+ 9.5424997806549072e-01 -1.5402799844741821e-01
+ <_>
+
+ 0 -1 2814 1.7439000308513641e-02
+
+ -5.1134999841451645e-02 2.7750301361083984e-01
+ <_>
+
+ 0 -1 2815 1.2319999514147639e-03
+
+ 7.5627997517585754e-02 -3.6456099152565002e-01
+ <_>
+
+ 0 -1 2816 2.7495000511407852e-02
+
+ 5.1844000816345215e-02 4.1562598943710327e-01
+ <_>
+
+ 0 -1 2817 -4.3543998152017593e-02
+
+ 7.1969997882843018e-01 -1.7132200300693512e-01
+ <_>
+
+ 0 -1 2818 1.1025999672710896e-02
+
+ 1.4354600012302399e-01 -6.5403002500534058e-01
+ <_>
+
+ 0 -1 2819 2.0865999162197113e-02
+
+ 4.0089000016450882e-02 -4.5743298530578613e-01
+ <_>
+
+ 0 -1 2820 -2.2304000332951546e-02
+
+ 5.3855001926422119e-01 7.1662999689579010e-02
+ <_>
+
+ 0 -1 2821 3.2492000609636307e-02
+
+ -4.5991998165845871e-02 -1.0047069787979126e+00
+ <_>
+
+ 0 -1 2822 1.2269999831914902e-02
+
+ 3.4334998577833176e-02 4.2431798577308655e-01
+ <_>
+
+ 0 -1 2823 8.3820000290870667e-03
+
+ -2.5850600004196167e-01 2.6263499259948730e-01
+ <_>
+
+ 0 -1 2824 3.7353999912738800e-02
+
+ 1.5692499279975891e-01 -1.0429090261459351e+00
+ <_>
+
+ 0 -1 2825 -1.4111000113189220e-02
+
+ -7.3177701234817505e-01 -2.0276999101042747e-02
+ <_>
+
+ 0 -1 2826 5.7066999375820160e-02
+
+ 8.3360001444816589e-02 1.5661499500274658e+00
+ <_>
+
+ 0 -1 2827 4.9680001102387905e-03
+
+ -3.5318198800086975e-01 1.4698399603366852e-01
+ <_>
+
+ 0 -1 2828 -2.4492999538779259e-02
+
+ 2.8325900435447693e-01 -3.4640000667423010e-03
+ <_>
+
+ 0 -1 2829 -1.1254999786615372e-02
+
+ -8.4017497301101685e-01 -3.6251999437808990e-02
+ <_>
+
+ 0 -1 2830 3.4533001482486725e-02
+
+ 1.4998500049114227e-01 -8.7367099523544312e-01
+ <_>
+
+ 0 -1 2831 2.4303000420331955e-02
+
+ -1.8787500262260437e-01 5.9483999013900757e-01
+ <_>
+
+ 0 -1 2832 -7.8790001571178436e-03
+
+ 4.4315698742866516e-01 -5.6570999324321747e-02
+ <_>
+
+ 0 -1 2833 3.5142000764608383e-02
+
+ -5.6494999676942825e-02 -1.3617190122604370e+00
+ <_>
+
+ 0 -1 2834 4.6259998343884945e-03
+
+ -3.1161698698997498e-01 2.5447699427604675e-01
+ <_>
+
+ 0 -1 2835 -8.3131000399589539e-02
+
+ 1.6424349546432495e+00 -1.4429399371147156e-01
+ <_>
+
+ 0 -1 2836 -1.4015999622642994e-02
+
+ -7.7819502353668213e-01 1.7173300683498383e-01
+ <_>
+
+ 0 -1 2837 1.2450000504031777e-03
+
+ -2.3191399872303009e-01 2.8527900576591492e-01
+ <_>
+
+ 0 -1 2838 -1.6803000122308731e-02
+
+ -3.5965099930763245e-01 2.0412999391555786e-01
+ <_>
+
+ 0 -1 2839 -7.6747998595237732e-02
+
+ 7.8050500154495239e-01 -1.5612800419330597e-01
+ <_>
+
+ 0 -1 2840 -2.3671999573707581e-01
+
+ 1.1813700199127197e+00 7.8111998736858368e-02
+ <_>
+
+ 0 -1 2841 -1.0057400166988373e-01
+
+ -4.7104099392890930e-01 7.9172998666763306e-02
+ <_>
+
+ 0 -1 2842 1.3239999534562230e-03
+
+ 2.2262699902057648e-01 -3.7099799513816833e-01
+ <_>
+
+ 0 -1 2843 2.2152999415993690e-02
+
+ -3.8649000227451324e-02 -9.2274999618530273e-01
+ <_>
+
+ 0 -1 2844 -1.1246199905872345e-01
+
+ 4.1899600625038147e-01 8.0411002039909363e-02
+ <_>
+
+ 0 -1 2845 1.6481000930070877e-02
+
+ -1.6756699979305267e-01 7.1842402219772339e-01
+ <_>
+
+ 0 -1 2846 6.8113997578620911e-02
+
+ 1.5719899535179138e-01 -8.7681102752685547e-01
+ <_>
+
+ 0 -1 2847 1.6011999920010567e-02
+
+ -4.1600000113248825e-03 -5.9327799081802368e-01
+ <_>
+
+ 0 -1 2848 4.6640001237392426e-03
+
+ -3.0153999105095863e-02 4.8345300555229187e-01
+ <_>
+
+ 0 -1 2849 6.7579997703433037e-03
+
+ -2.2667400538921356e-01 3.3662301301956177e-01
+ <_>
+
+ 0 -1 2850 4.7289999201893806e-03
+
+ -6.0373999178409576e-02 3.1458100676536560e-01
+ <_>
+
+ 0 -1 2851 2.5869999080896378e-03
+
+ -2.9872599244117737e-01 1.7787499725818634e-01
+ <_>
+
+ 0 -1 2852 2.8989999555051327e-03
+
+ 2.1890200674533844e-01 -2.9567098617553711e-01
+ <_>
+
+ 0 -1 2853 -3.0053999274969101e-02
+
+ 1.2150429487228394e+00 -1.4354999363422394e-01
+ <_>
+
+ 0 -1 2854 1.4181000180542469e-02
+
+ 1.2451999820768833e-02 5.5490100383758545e-01
+ <_>
+
+ 0 -1 2855 -6.0527000576257706e-02
+
+ -1.4933999776840210e+00 -6.5227001905441284e-02
+ <_>
+
+ 0 -1 2856 -1.9882999360561371e-02
+
+ -3.8526400923728943e-01 1.9761200249195099e-01
+ <_>
+
+ 0 -1 2857 3.1218999996781349e-02
+
+ -2.1281200647354126e-01 2.9446500539779663e-01
+ <_>
+
+ 0 -1 2858 1.8271999433636665e-02
+
+ 9.7200000891461968e-04 6.6814202070236206e-01
+ <_>
+
+ 0 -1 2859 1.1089999461546540e-03
+
+ -6.2467902898788452e-01 -1.6599999507889152e-03
+ <_>
+
+ 0 -1 2860 -3.6713998764753342e-02
+
+ -4.2333900928497314e-01 1.2084700167179108e-01
+ <_>
+
+ 0 -1 2861 1.2044000439345837e-02
+
+ 2.5882000103592873e-02 -5.0732398033142090e-01
+ <_>
+
+ 0 -1 2862 7.4749000370502472e-02
+
+ 1.3184699416160583e-01 -2.1739600598812103e-01
+ <_>
+
+ 0 -1 2863 -2.3473200201988220e-01
+
+ 1.1775610446929932e+00 -1.5114699304103851e-01
+ <_>
+
+ 0 -1 2864 1.4096499979496002e-01
+
+ 3.3991001546382904e-02 3.9923098683357239e-01
+ <_>
+
+ 0 -1 2865 6.1789997853338718e-03
+
+ -3.1806701421737671e-01 1.1681699752807617e-01
+ <_>
+
+ 0 -1 2866 -5.7216998189687729e-02
+
+ 8.4399098157882690e-01 8.3889000117778778e-02
+ <_>
+
+ 0 -1 2867 -5.5227000266313553e-02
+
+ 3.6888301372528076e-01 -1.8913400173187256e-01
+ <_>
+
+ 0 -1 2868 -2.1583000198006630e-02
+
+ -5.2161800861358643e-01 1.5772600471973419e-01
+ <_>
+
+ 0 -1 2869 2.5747999548912048e-02
+
+ -5.9921998530626297e-02 -1.0674990415573120e+00
+ <_>
+
+ 0 -1 2870 -1.3098999857902527e-02
+
+ 7.8958398103713989e-01 5.2099999040365219e-02
+ <_>
+
+ 0 -1 2871 2.2799998987466097e-03
+
+ -1.1704430580139160e+00 -5.9356998652219772e-02
+ <_>
+
+ 0 -1 2872 8.8060004636645317e-03
+
+ 4.1717998683452606e-02 6.6352599859237671e-01
+ <_>
+
+ 0 -1 2873 -8.9699998497962952e-03
+
+ -3.5862699151039124e-01 6.0458000749349594e-02
+ <_>
+
+ 0 -1 2874 4.0230001322925091e-03
+
+ 2.0979399979114532e-01 -2.4806000292301178e-01
+ <_>
+
+ 0 -1 2875 2.5017000734806061e-02
+
+ -1.8795900046825409e-01 3.9547100663185120e-01
+ <_>
+
+ 0 -1 2876 -5.9009999968111515e-03
+
+ 2.5663900375366211e-01 -9.4919003546237946e-02
+ <_>
+
+ 0 -1 2877 4.3850000947713852e-03
+
+ 3.3139001578092575e-02 -4.6075400710105896e-01
+ <_>
+
+ 0 -1 2878 -3.3771999180316925e-02
+
+ -9.8881602287292480e-01 1.4636899530887604e-01
+ <_>
+
+ 0 -1 2879 4.4523000717163086e-02
+
+ -1.3286699354648590e-01 1.5796790122985840e+00
+ <_>
+
+ 0 -1 2880 -4.0929000824689865e-02
+
+ 3.3877098560333252e-01 7.4970997869968414e-02
+ <_>
+
+ 0 -1 2881 3.9351999759674072e-02
+
+ -1.8327899277210236e-01 4.6980699896812439e-01
+ <_>
+
+ 0 -1 2882 -7.0322997868061066e-02
+
+ -9.8322701454162598e-01 1.1808100342750549e-01
+ <_>
+
+ 0 -1 2883 3.5743001848459244e-02
+
+ -3.3050999045372009e-02 -8.3610898256301880e-01
+ <_>
+
+ 0 -1 2884 -4.2961999773979187e-02
+
+ 1.1670809984207153e+00 8.0692000687122345e-02
+ <_>
+
+ 0 -1 2885 -2.1007999777793884e-02
+
+ 6.3869798183441162e-01 -1.7626300454139709e-01
+ <_>
+
+ 0 -1 2886 -1.5742200613021851e-01
+
+ -2.3302499949932098e-01 1.2517499923706055e-01
+ <_>
+
+ 0 -1 2887 7.8659998252987862e-03
+
+ -2.2037999331951141e-01 2.7196800708770752e-01
+ <_>
+
+ 0 -1 2888 2.3622000589966774e-02
+
+ 1.6127300262451172e-01 -4.3329000473022461e-01
+ <_>
+
+ 0 -1 2889 7.4692003428936005e-02
+
+ -1.6991999745368958e-01 5.8884900808334351e-01
+ <_>
+
+ 0 -1 2890 -6.4799998654052615e-04
+
+ 2.5842899084091187e-01 -3.5911999642848969e-02
+ <_>
+
+ 0 -1 2891 -1.6290999948978424e-02
+
+ -7.6764398813247681e-01 -2.0472999662160873e-02
+ <_>
+
+ 0 -1 2892 -3.3133998513221741e-02
+
+ -2.7180099487304688e-01 1.4325700700283051e-01
+ <_>
+
+ 0 -1 2893 4.8797998577356339e-02
+
+ 7.6408997178077698e-02 -4.1445198655128479e-01
+ <_>
+
+ 0 -1 2894 2.2869999520480633e-03
+
+ -3.8628999143838882e-02 2.0753799378871918e-01
+ <_>
+
+ 0 -1 2895 4.5304000377655029e-02
+
+ -1.7777900397777557e-01 6.3461399078369141e-01
+ <_>
+
+ 0 -1 2896 1.0705800354480743e-01
+
+ 1.8972299993038177e-01 -5.1236200332641602e-01
+ <_>
+
+ 0 -1 2897 -4.0525000542402267e-02
+
+ 7.0614999532699585e-01 -1.7803299427032471e-01
+ <_>
+
+ 0 -1 2898 3.1968999654054642e-02
+
+ 6.8149998784065247e-02 6.8733102083206177e-01
+ <_>
+
+ 0 -1 2899 -5.7617001235485077e-02
+
+ 7.5170499086380005e-01 -1.5764999389648438e-01
+ <_>
+
+ 0 -1 2900 1.3593999668955803e-02
+
+ 1.9411900639533997e-01 -2.4561899900436401e-01
+ <_>
+
+ 0 -1 2901 7.1396000683307648e-02
+
+ -4.6881001442670822e-02 -8.8198298215866089e-01
+ <_>
+
+ 0 -1 2902 -1.4895999804139137e-02
+
+ -4.4532400369644165e-01 1.7679899930953979e-01
+ <_>
+
+ 0 -1 2903 -1.0026000440120697e-02
+
+ 6.5122699737548828e-01 -1.6709999740123749e-01
+ <_>
+
+ 0 -1 2904 3.7589999847114086e-03
+
+ -5.8301001787185669e-02 3.4483298659324646e-01
+ <_>
+
+ 0 -1 2905 1.6263000667095184e-02
+
+ -1.5581500530242920e-01 8.6432701349258423e-01
+ <_>
+
+ 0 -1 2906 -4.0176000446081161e-02
+
+ -6.1028599739074707e-01 1.1796399950981140e-01
+ <_>
+
+ 0 -1 2907 2.7080999687314034e-02
+
+ -4.9601998180150986e-02 -8.9990001916885376e-01
+ <_>
+
+ 0 -1 2908 5.2420001477003098e-02
+
+ 1.1297199875116348e-01 -1.0833640098571777e+00
+ <_>
+
+ 0 -1 2909 -1.9160000607371330e-02
+
+ -7.9880100488662720e-01 -3.4079000353813171e-02
+ <_>
+
+ 0 -1 2910 -3.7730000913143158e-03
+
+ -1.9124099612236023e-01 2.1535199880599976e-01
+ <_>
+
+ 0 -1 2911 7.5762003660202026e-02
+
+ -1.3421699404716492e-01 1.6807060241699219e+00
+ <_>
+
+ 0 -1 2912 -2.2173000499606133e-02
+
+ 4.8600998520851135e-01 3.6160000599920750e-03
+
+ <_>
+
+ <_>
+ 6 4 12 9 -1.
+ <_>
+ 6 7 12 3 3.
+ <_>
+
+ <_>
+ 6 4 12 7 -1.
+ <_>
+ 10 4 4 7 3.
+ <_>
+
+ <_>
+ 3 9 18 9 -1.
+ <_>
+ 3 12 18 3 3.
+ <_>
+
+ <_>
+ 8 18 9 6 -1.
+ <_>
+ 8 20 9 2 3.
+ <_>
+
+ <_>
+ 3 5 4 19 -1.
+ <_>
+ 5 5 2 19 2.
+ <_>
+
+ <_>
+ 6 5 12 16 -1.
+ <_>
+ 6 13 12 8 2.
+ <_>
+
+ <_>
+ 5 8 12 6 -1.
+ <_>
+ 5 11 12 3 2.
+ <_>
+
+ <_>
+ 11 14 4 10 -1.
+ <_>
+ 11 19 4 5 2.
+ <_>
+
+ <_>
+ 4 0 7 6 -1.
+ <_>
+ 4 3 7 3 2.
+ <_>
+
+ <_>
+ 6 6 12 6 -1.
+ <_>
+ 6 8 12 2 3.
+ <_>
+
+ <_>
+ 6 4 12 7 -1.
+ <_>
+ 10 4 4 7 3.
+ <_>
+
+ <_>
+ 1 8 19 12 -1.
+ <_>
+ 1 12 19 4 3.
+ <_>
+
+ <_>
+ 0 2 24 3 -1.
+ <_>
+ 8 2 8 3 3.
+ <_>
+
+ <_>
+ 9 9 6 15 -1.
+ <_>
+ 9 14 6 5 3.
+ <_>
+
+ <_>
+ 5 6 14 10 -1.
+ <_>
+ 5 11 14 5 2.
+ <_>
+
+ <_>
+ 5 0 14 9 -1.
+ <_>
+ 5 3 14 3 3.
+ <_>
+
+ <_>
+ 13 11 9 6 -1.
+ <_>
+ 16 11 3 6 3.
+ <_>
+
+ <_>
+ 7 5 6 10 -1.
+ <_>
+ 9 5 2 10 3.
+ <_>
+
+ <_>
+ 10 8 6 10 -1.
+ <_>
+ 12 8 2 10 3.
+ <_>
+
+ <_>
+ 2 5 4 9 -1.
+ <_>
+ 4 5 2 9 2.
+ <_>
+
+ <_>
+ 18 0 6 11 -1.
+ <_>
+ 20 0 2 11 3.
+ <_>
+
+ <_>
+ 0 6 24 13 -1.
+ <_>
+ 8 6 8 13 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 7 18 10 6 -1.
+ <_>
+ 7 20 10 2 3.
+ <_>
+
+ <_>
+ 5 7 14 12 -1.
+ <_>
+ 5 13 14 6 2.
+ <_>
+
+ <_>
+ 0 3 24 3 -1.
+ <_>
+ 8 3 8 3 3.
+ <_>
+
+ <_>
+ 5 8 15 6 -1.
+ <_>
+ 5 11 15 3 2.
+ <_>
+
+ <_>
+ 9 6 5 14 -1.
+ <_>
+ 9 13 5 7 2.
+ <_>
+
+ <_>
+ 9 5 6 10 -1.
+ <_>
+ 11 5 2 10 3.
+ <_>
+
+ <_>
+ 6 6 3 12 -1.
+ <_>
+ 6 12 3 6 2.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 9 21 6 3 3.
+ <_>
+
+ <_>
+ 5 6 13 6 -1.
+ <_>
+ 5 8 13 2 3.
+ <_>
+
+ <_>
+ 18 1 6 15 -1.
+ <_>
+ 18 1 3 15 2.
+ <_>
+
+ <_>
+ 1 1 6 15 -1.
+ <_>
+ 4 1 3 15 2.
+ <_>
+
+ <_>
+ 0 8 24 15 -1.
+ <_>
+ 8 8 8 15 3.
+ <_>
+
+ <_>
+ 5 6 14 12 -1.
+ <_>
+ 5 6 7 6 2.
+ <_>
+ 12 12 7 6 2.
+ <_>
+
+ <_>
+ 2 12 21 12 -1.
+ <_>
+ 2 16 21 4 3.
+ <_>
+
+ <_>
+ 8 1 4 10 -1.
+ <_>
+ 10 1 2 10 2.
+ <_>
+
+ <_>
+ 2 13 20 10 -1.
+ <_>
+ 2 13 10 10 2.
+ <_>
+
+ <_>
+ 0 1 6 13 -1.
+ <_>
+ 2 1 2 13 3.
+ <_>
+
+ <_>
+ 20 2 4 13 -1.
+ <_>
+ 20 2 2 13 2.
+ <_>
+
+ <_>
+ 0 5 22 19 -1.
+ <_>
+ 11 5 11 19 2.
+ <_>
+
+ <_>
+ 18 4 6 9 -1.
+ <_>
+ 20 4 2 9 3.
+ <_>
+
+ <_>
+ 0 3 6 11 -1.
+ <_>
+ 2 3 2 11 3.
+ <_>
+
+ <_>
+ 12 1 4 9 -1.
+ <_>
+ 12 1 2 9 2.
+ <_>
+
+ <_>
+ 0 6 19 3 -1.
+ <_>
+ 0 7 19 1 3.
+ <_>
+
+ <_>
+ 12 1 4 9 -1.
+ <_>
+ 12 1 2 9 2.
+ <_>
+
+ <_>
+ 8 1 4 9 -1.
+ <_>
+ 10 1 2 9 2.
+ <_>
+
+ <_>
+ 5 5 14 14 -1.
+ <_>
+ 12 5 7 7 2.
+ <_>
+ 5 12 7 7 2.
+ <_>
+
+ <_>
+ 1 10 18 2 -1.
+ <_>
+ 1 11 18 1 2.
+ <_>
+
+ <_>
+ 17 13 4 11 -1.
+ <_>
+ 17 13 2 11 2.
+ <_>
+
+ <_>
+ 0 4 6 9 -1.
+ <_>
+ 0 7 6 3 3.
+ <_>
+
+ <_>
+ 6 4 12 9 -1.
+ <_>
+ 6 7 12 3 3.
+ <_>
+
+ <_>
+ 6 5 12 6 -1.
+ <_>
+ 10 5 4 6 3.
+ <_>
+
+ <_>
+ 0 1 24 5 -1.
+ <_>
+ 8 1 8 5 3.
+ <_>
+
+ <_>
+ 4 10 18 6 -1.
+ <_>
+ 4 12 18 2 3.
+ <_>
+
+ <_>
+ 2 17 12 6 -1.
+ <_>
+ 2 17 6 3 2.
+ <_>
+ 8 20 6 3 2.
+ <_>
+
+ <_>
+ 19 3 4 13 -1.
+ <_>
+ 19 3 2 13 2.
+ <_>
+
+ <_>
+ 1 3 4 13 -1.
+ <_>
+ 3 3 2 13 2.
+ <_>
+
+ <_>
+ 0 1 24 23 -1.
+ <_>
+ 8 1 8 23 3.
+ <_>
+
+ <_>
+ 1 7 8 12 -1.
+ <_>
+ 1 11 8 4 3.
+ <_>
+
+ <_>
+ 14 7 3 14 -1.
+ <_>
+ 14 14 3 7 2.
+ <_>
+
+ <_>
+ 3 12 16 6 -1.
+ <_>
+ 3 12 8 3 2.
+ <_>
+ 11 15 8 3 2.
+ <_>
+
+ <_>
+ 6 6 12 6 -1.
+ <_>
+ 6 8 12 2 3.
+ <_>
+
+ <_>
+ 8 7 6 12 -1.
+ <_>
+ 8 13 6 6 2.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 1 17 18 3 -1.
+ <_>
+ 1 18 18 1 3.
+ <_>
+
+ <_>
+ 4 4 16 12 -1.
+ <_>
+ 4 10 16 6 2.
+ <_>
+
+ <_>
+ 0 1 4 20 -1.
+ <_>
+ 2 1 2 20 2.
+ <_>
+
+ <_>
+ 3 0 18 2 -1.
+ <_>
+ 3 1 18 1 2.
+ <_>
+
+ <_>
+ 1 5 20 14 -1.
+ <_>
+ 1 5 10 7 2.
+ <_>
+ 11 12 10 7 2.
+ <_>
+
+ <_>
+ 5 8 14 12 -1.
+ <_>
+ 5 12 14 4 3.
+ <_>
+
+ <_>
+ 3 14 7 9 -1.
+ <_>
+ 3 17 7 3 3.
+ <_>
+
+ <_>
+ 14 15 9 6 -1.
+ <_>
+ 14 17 9 2 3.
+ <_>
+
+ <_>
+ 1 15 9 6 -1.
+ <_>
+ 1 17 9 2 3.
+ <_>
+
+ <_>
+ 11 6 8 10 -1.
+ <_>
+ 15 6 4 5 2.
+ <_>
+ 11 11 4 5 2.
+ <_>
+
+ <_>
+ 5 5 14 14 -1.
+ <_>
+ 5 5 7 7 2.
+ <_>
+ 12 12 7 7 2.
+ <_>
+
+ <_>
+ 6 0 12 5 -1.
+ <_>
+ 10 0 4 5 3.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 9 3 6 3 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 3 8 18 4 -1.
+ <_>
+ 9 8 6 4 3.
+ <_>
+
+ <_>
+ 6 0 12 9 -1.
+ <_>
+ 6 3 12 3 3.
+ <_>
+
+ <_>
+ 0 0 24 6 -1.
+ <_>
+ 8 0 8 6 3.
+ <_>
+
+ <_>
+ 4 7 16 12 -1.
+ <_>
+ 4 11 16 4 3.
+ <_>
+
+ <_>
+ 11 6 6 6 -1.
+ <_>
+ 11 6 3 6 2.
+ <_>
+
+ <_>
+ 0 20 24 3 -1.
+ <_>
+ 8 20 8 3 3.
+ <_>
+
+ <_>
+ 11 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 4 13 15 4 -1.
+ <_>
+ 9 13 5 4 3.
+ <_>
+
+ <_>
+ 11 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 9 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 9 12 6 12 -1.
+ <_>
+ 9 18 6 6 2.
+ <_>
+
+ <_>
+ 1 22 18 2 -1.
+ <_>
+ 1 23 18 1 2.
+ <_>
+
+ <_>
+ 10 7 4 10 -1.
+ <_>
+ 10 12 4 5 2.
+ <_>
+
+ <_>
+ 6 7 8 10 -1.
+ <_>
+ 6 12 8 5 2.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 0 14 10 4 -1.
+ <_>
+ 0 16 10 2 2.
+ <_>
+
+ <_>
+ 6 18 18 2 -1.
+ <_>
+ 6 19 18 1 2.
+ <_>
+
+ <_>
+ 1 1 22 3 -1.
+ <_>
+ 1 2 22 1 3.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 2 4 6 15 -1.
+ <_>
+ 5 4 3 15 2.
+ <_>
+
+ <_>
+ 20 4 4 10 -1.
+ <_>
+ 20 4 2 10 2.
+ <_>
+
+ <_>
+ 0 4 4 10 -1.
+ <_>
+ 2 4 2 10 2.
+ <_>
+
+ <_>
+ 2 16 20 6 -1.
+ <_>
+ 12 16 10 3 2.
+ <_>
+ 2 19 10 3 2.
+ <_>
+
+ <_>
+ 0 12 8 9 -1.
+ <_>
+ 4 12 4 9 2.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 5 10 6 6 -1.
+ <_>
+ 8 10 3 6 2.
+ <_>
+
+ <_>
+ 11 8 12 6 -1.
+ <_>
+ 17 8 6 3 2.
+ <_>
+ 11 11 6 3 2.
+ <_>
+
+ <_>
+ 0 8 12 6 -1.
+ <_>
+ 0 8 6 3 2.
+ <_>
+ 6 11 6 3 2.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 8 14 9 6 -1.
+ <_>
+ 8 16 9 2 3.
+ <_>
+
+ <_>
+ 0 16 9 6 -1.
+ <_>
+ 0 18 9 2 3.
+ <_>
+
+ <_>
+ 10 8 6 10 -1.
+ <_>
+ 12 8 2 10 3.
+ <_>
+
+ <_>
+ 3 19 12 3 -1.
+ <_>
+ 9 19 6 3 2.
+ <_>
+
+ <_>
+ 2 10 20 2 -1.
+ <_>
+ 2 11 20 1 2.
+ <_>
+
+ <_>
+ 2 9 18 12 -1.
+ <_>
+ 2 9 9 6 2.
+ <_>
+ 11 15 9 6 2.
+ <_>
+
+ <_>
+ 3 0 18 24 -1.
+ <_>
+ 3 0 9 24 2.
+ <_>
+
+ <_>
+ 5 6 14 10 -1.
+ <_>
+ 5 6 7 5 2.
+ <_>
+ 12 11 7 5 2.
+ <_>
+
+ <_>
+ 9 5 10 12 -1.
+ <_>
+ 14 5 5 6 2.
+ <_>
+ 9 11 5 6 2.
+ <_>
+
+ <_>
+ 4 5 12 12 -1.
+ <_>
+ 4 5 6 6 2.
+ <_>
+ 10 11 6 6 2.
+ <_>
+
+ <_>
+ 4 14 18 3 -1.
+ <_>
+ 4 15 18 1 3.
+ <_>
+
+ <_>
+ 6 13 8 8 -1.
+ <_>
+ 6 17 8 4 2.
+ <_>
+
+ <_>
+ 3 16 18 6 -1.
+ <_>
+ 3 19 18 3 2.
+ <_>
+
+ <_>
+ 0 0 6 6 -1.
+ <_>
+ 3 0 3 6 2.
+ <_>
+
+ <_>
+ 6 6 12 18 -1.
+ <_>
+ 10 6 4 18 3.
+ <_>
+
+ <_>
+ 6 1 4 14 -1.
+ <_>
+ 8 1 2 14 2.
+ <_>
+
+ <_>
+ 3 2 19 2 -1.
+ <_>
+ 3 3 19 1 2.
+ <_>
+
+ <_>
+ 1 8 22 13 -1.
+ <_>
+ 12 8 11 13 2.
+ <_>
+
+ <_>
+ 8 9 11 4 -1.
+ <_>
+ 8 11 11 2 2.
+ <_>
+
+ <_>
+ 0 12 15 10 -1.
+ <_>
+ 5 12 5 10 3.
+ <_>
+
+ <_>
+ 12 16 12 6 -1.
+ <_>
+ 16 16 4 6 3.
+ <_>
+
+ <_>
+ 0 16 12 6 -1.
+ <_>
+ 4 16 4 6 3.
+ <_>
+
+ <_>
+ 19 1 5 12 -1.
+ <_>
+ 19 5 5 4 3.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 8 2 8 4 3.
+ <_>
+
+ <_>
+ 6 8 12 4 -1.
+ <_>
+ 6 10 12 2 2.
+ <_>
+
+ <_>
+ 7 5 9 6 -1.
+ <_>
+ 10 5 3 6 3.
+ <_>
+
+ <_>
+ 9 17 6 6 -1.
+ <_>
+ 9 20 6 3 2.
+ <_>
+
+ <_>
+ 0 7 22 15 -1.
+ <_>
+ 0 12 22 5 3.
+ <_>
+
+ <_>
+ 4 1 17 9 -1.
+ <_>
+ 4 4 17 3 3.
+ <_>
+
+ <_>
+ 7 5 6 10 -1.
+ <_>
+ 9 5 2 10 3.
+ <_>
+
+ <_>
+ 18 1 6 8 -1.
+ <_>
+ 18 1 3 8 2.
+ <_>
+
+ <_>
+ 0 1 6 7 -1.
+ <_>
+ 3 1 3 7 2.
+ <_>
+
+ <_>
+ 18 0 6 22 -1.
+ <_>
+ 18 0 3 22 2.
+ <_>
+
+ <_>
+ 0 0 6 22 -1.
+ <_>
+ 3 0 3 22 2.
+ <_>
+
+ <_>
+ 16 7 8 16 -1.
+ <_>
+ 16 7 4 16 2.
+ <_>
+
+ <_>
+ 2 10 19 6 -1.
+ <_>
+ 2 12 19 2 3.
+ <_>
+
+ <_>
+ 9 9 6 12 -1.
+ <_>
+ 9 13 6 4 3.
+ <_>
+
+ <_>
+ 2 15 17 6 -1.
+ <_>
+ 2 17 17 2 3.
+ <_>
+
+ <_>
+ 14 7 3 14 -1.
+ <_>
+ 14 14 3 7 2.
+ <_>
+
+ <_>
+ 5 6 8 10 -1.
+ <_>
+ 5 6 4 5 2.
+ <_>
+ 9 11 4 5 2.
+ <_>
+
+ <_>
+ 15 8 9 11 -1.
+ <_>
+ 18 8 3 11 3.
+ <_>
+
+ <_>
+ 0 8 9 11 -1.
+ <_>
+ 3 8 3 11 3.
+ <_>
+
+ <_>
+ 8 6 10 18 -1.
+ <_>
+ 8 15 10 9 2.
+ <_>
+
+ <_>
+ 7 7 3 14 -1.
+ <_>
+ 7 14 3 7 2.
+ <_>
+
+ <_>
+ 0 14 24 8 -1.
+ <_>
+ 8 14 8 8 3.
+ <_>
+
+ <_>
+ 1 10 18 14 -1.
+ <_>
+ 10 10 9 14 2.
+ <_>
+
+ <_>
+ 14 12 6 6 -1.
+ <_>
+ 14 15 6 3 2.
+ <_>
+
+ <_>
+ 7 0 10 16 -1.
+ <_>
+ 7 0 5 8 2.
+ <_>
+ 12 8 5 8 2.
+ <_>
+
+ <_>
+ 10 0 9 6 -1.
+ <_>
+ 13 0 3 6 3.
+ <_>
+
+ <_>
+ 4 3 16 4 -1.
+ <_>
+ 12 3 8 4 2.
+ <_>
+
+ <_>
+ 10 0 9 6 -1.
+ <_>
+ 13 0 3 6 3.
+ <_>
+
+ <_>
+ 1 1 20 4 -1.
+ <_>
+ 1 1 10 2 2.
+ <_>
+ 11 3 10 2 2.
+ <_>
+
+ <_>
+ 10 0 9 6 -1.
+ <_>
+ 13 0 3 6 3.
+ <_>
+
+ <_>
+ 5 0 9 6 -1.
+ <_>
+ 8 0 3 6 3.
+ <_>
+
+ <_>
+ 8 18 10 6 -1.
+ <_>
+ 8 20 10 2 3.
+ <_>
+
+ <_>
+ 6 3 6 9 -1.
+ <_>
+ 8 3 2 9 3.
+ <_>
+
+ <_>
+ 7 3 12 6 -1.
+ <_>
+ 7 5 12 2 3.
+ <_>
+
+ <_>
+ 0 10 18 3 -1.
+ <_>
+ 0 11 18 1 3.
+ <_>
+
+ <_>
+ 1 10 22 3 -1.
+ <_>
+ 1 11 22 1 3.
+ <_>
+
+ <_>
+ 5 11 8 8 -1.
+ <_>
+ 9 11 4 8 2.
+ <_>
+
+ <_>
+ 12 11 6 6 -1.
+ <_>
+ 12 11 3 6 2.
+ <_>
+
+ <_>
+ 6 11 6 6 -1.
+ <_>
+ 9 11 3 6 2.
+ <_>
+
+ <_>
+ 7 10 11 6 -1.
+ <_>
+ 7 12 11 2 3.
+ <_>
+
+ <_>
+ 0 13 24 4 -1.
+ <_>
+ 0 13 12 2 2.
+ <_>
+ 12 15 12 2 2.
+ <_>
+
+ <_>
+ 2 4 22 12 -1.
+ <_>
+ 13 4 11 6 2.
+ <_>
+ 2 10 11 6 2.
+ <_>
+
+ <_>
+ 2 0 20 17 -1.
+ <_>
+ 12 0 10 17 2.
+ <_>
+
+ <_>
+ 14 0 2 24 -1.
+ <_>
+ 14 0 1 24 2.
+ <_>
+
+ <_>
+ 8 0 2 24 -1.
+ <_>
+ 9 0 1 24 2.
+ <_>
+
+ <_>
+ 14 1 2 22 -1.
+ <_>
+ 14 1 1 22 2.
+ <_>
+
+ <_>
+ 8 1 2 22 -1.
+ <_>
+ 9 1 1 22 2.
+ <_>
+
+ <_>
+ 17 6 3 18 -1.
+ <_>
+ 18 6 1 18 3.
+ <_>
+
+ <_>
+ 6 14 9 6 -1.
+ <_>
+ 6 16 9 2 3.
+ <_>
+
+ <_>
+ 13 14 9 4 -1.
+ <_>
+ 13 16 9 2 2.
+ <_>
+
+ <_>
+ 3 18 18 3 -1.
+ <_>
+ 3 19 18 1 3.
+ <_>
+
+ <_>
+ 9 4 8 18 -1.
+ <_>
+ 13 4 4 9 2.
+ <_>
+ 9 13 4 9 2.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 0 2 12 4 -1.
+ <_>
+ 6 2 6 4 2.
+ <_>
+
+ <_>
+ 6 8 14 6 -1.
+ <_>
+ 6 11 14 3 2.
+ <_>
+
+ <_>
+ 7 5 6 6 -1.
+ <_>
+ 10 5 3 6 2.
+ <_>
+
+ <_>
+ 10 5 6 16 -1.
+ <_>
+ 10 13 6 8 2.
+ <_>
+
+ <_>
+ 1 4 9 16 -1.
+ <_>
+ 4 4 3 16 3.
+ <_>
+
+ <_>
+ 5 0 18 9 -1.
+ <_>
+ 5 3 18 3 3.
+ <_>
+
+ <_>
+ 9 15 5 8 -1.
+ <_>
+ 9 19 5 4 2.
+ <_>
+
+ <_>
+ 20 0 4 9 -1.
+ <_>
+ 20 0 2 9 2.
+ <_>
+
+ <_>
+ 2 0 18 3 -1.
+ <_>
+ 2 1 18 1 3.
+ <_>
+
+ <_>
+ 5 22 19 2 -1.
+ <_>
+ 5 23 19 1 2.
+ <_>
+
+ <_>
+ 0 0 4 9 -1.
+ <_>
+ 2 0 2 9 2.
+ <_>
+
+ <_>
+ 5 6 19 18 -1.
+ <_>
+ 5 12 19 6 3.
+ <_>
+
+ <_>
+ 0 1 6 9 -1.
+ <_>
+ 2 1 2 9 3.
+ <_>
+
+ <_>
+ 6 5 14 12 -1.
+ <_>
+ 13 5 7 6 2.
+ <_>
+ 6 11 7 6 2.
+ <_>
+
+ <_>
+ 0 1 20 2 -1.
+ <_>
+ 0 2 20 1 2.
+ <_>
+
+ <_>
+ 1 2 22 3 -1.
+ <_>
+ 1 3 22 1 3.
+ <_>
+
+ <_>
+ 2 8 7 9 -1.
+ <_>
+ 2 11 7 3 3.
+ <_>
+
+ <_>
+ 2 12 22 4 -1.
+ <_>
+ 13 12 11 2 2.
+ <_>
+ 2 14 11 2 2.
+ <_>
+
+ <_>
+ 0 12 22 4 -1.
+ <_>
+ 0 12 11 2 2.
+ <_>
+ 11 14 11 2 2.
+ <_>
+
+ <_>
+ 9 7 6 11 -1.
+ <_>
+ 11 7 2 11 3.
+ <_>
+
+ <_>
+ 7 1 9 6 -1.
+ <_>
+ 10 1 3 6 3.
+ <_>
+
+ <_>
+ 11 2 4 10 -1.
+ <_>
+ 11 7 4 5 2.
+ <_>
+
+ <_>
+ 6 4 12 12 -1.
+ <_>
+ 6 10 12 6 2.
+ <_>
+
+ <_>
+ 18 1 6 15 -1.
+ <_>
+ 18 6 6 5 3.
+ <_>
+
+ <_>
+ 3 15 18 3 -1.
+ <_>
+ 3 16 18 1 3.
+ <_>
+
+ <_>
+ 18 5 6 9 -1.
+ <_>
+ 18 8 6 3 3.
+ <_>
+
+ <_>
+ 1 5 16 6 -1.
+ <_>
+ 1 5 8 3 2.
+ <_>
+ 9 8 8 3 2.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 0 4 24 14 -1.
+ <_>
+ 0 4 12 7 2.
+ <_>
+ 12 11 12 7 2.
+ <_>
+
+ <_>
+ 13 0 4 13 -1.
+ <_>
+ 13 0 2 13 2.
+ <_>
+
+ <_>
+ 7 0 4 13 -1.
+ <_>
+ 9 0 2 13 2.
+ <_>
+
+ <_>
+ 11 6 6 9 -1.
+ <_>
+ 13 6 2 9 3.
+ <_>
+
+ <_>
+ 8 7 6 9 -1.
+ <_>
+ 10 7 2 9 3.
+ <_>
+
+ <_>
+ 13 17 9 6 -1.
+ <_>
+ 13 19 9 2 3.
+ <_>
+
+ <_>
+ 2 18 14 6 -1.
+ <_>
+ 2 18 7 3 2.
+ <_>
+ 9 21 7 3 2.
+ <_>
+
+ <_>
+ 3 18 18 4 -1.
+ <_>
+ 12 18 9 2 2.
+ <_>
+ 3 20 9 2 2.
+ <_>
+
+ <_>
+ 0 20 15 4 -1.
+ <_>
+ 5 20 5 4 3.
+ <_>
+
+ <_>
+ 9 15 15 9 -1.
+ <_>
+ 14 15 5 9 3.
+ <_>
+
+ <_>
+ 4 4 16 4 -1.
+ <_>
+ 4 6 16 2 2.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 0 14 15 10 -1.
+ <_>
+ 5 14 5 10 3.
+ <_>
+
+ <_>
+ 7 9 10 14 -1.
+ <_>
+ 12 9 5 7 2.
+ <_>
+ 7 16 5 7 2.
+ <_>
+
+ <_>
+ 7 6 6 9 -1.
+ <_>
+ 9 6 2 9 3.
+ <_>
+
+ <_>
+ 3 6 18 3 -1.
+ <_>
+ 3 7 18 1 3.
+ <_>
+
+ <_>
+ 0 10 18 3 -1.
+ <_>
+ 0 11 18 1 3.
+ <_>
+
+ <_>
+ 3 16 18 4 -1.
+ <_>
+ 12 16 9 2 2.
+ <_>
+ 3 18 9 2 2.
+ <_>
+
+ <_>
+ 4 6 14 6 -1.
+ <_>
+ 4 6 7 3 2.
+ <_>
+ 11 9 7 3 2.
+ <_>
+
+ <_>
+ 13 0 2 18 -1.
+ <_>
+ 13 0 1 18 2.
+ <_>
+
+ <_>
+ 9 0 2 18 -1.
+ <_>
+ 10 0 1 18 2.
+ <_>
+
+ <_>
+ 5 7 15 10 -1.
+ <_>
+ 10 7 5 10 3.
+ <_>
+
+ <_>
+ 1 20 21 4 -1.
+ <_>
+ 8 20 7 4 3.
+ <_>
+
+ <_>
+ 10 5 5 18 -1.
+ <_>
+ 10 14 5 9 2.
+ <_>
+
+ <_>
+ 0 2 24 6 -1.
+ <_>
+ 0 2 12 3 2.
+ <_>
+ 12 5 12 3 2.
+ <_>
+
+ <_>
+ 1 1 22 8 -1.
+ <_>
+ 12 1 11 4 2.
+ <_>
+ 1 5 11 4 2.
+ <_>
+
+ <_>
+ 4 0 15 9 -1.
+ <_>
+ 4 3 15 3 3.
+ <_>
+
+ <_>
+ 0 0 24 19 -1.
+ <_>
+ 8 0 8 19 3.
+ <_>
+
+ <_>
+ 2 21 18 3 -1.
+ <_>
+ 11 21 9 3 2.
+ <_>
+
+ <_>
+ 9 7 10 4 -1.
+ <_>
+ 9 7 5 4 2.
+ <_>
+
+ <_>
+ 5 7 10 4 -1.
+ <_>
+ 10 7 5 4 2.
+ <_>
+
+ <_>
+ 17 8 6 16 -1.
+ <_>
+ 20 8 3 8 2.
+ <_>
+ 17 16 3 8 2.
+ <_>
+
+ <_>
+ 1 15 20 4 -1.
+ <_>
+ 1 15 10 2 2.
+ <_>
+ 11 17 10 2 2.
+ <_>
+
+ <_>
+ 14 15 10 6 -1.
+ <_>
+ 14 17 10 2 3.
+ <_>
+
+ <_>
+ 3 0 16 9 -1.
+ <_>
+ 3 3 16 3 3.
+ <_>
+
+ <_>
+ 15 6 7 15 -1.
+ <_>
+ 15 11 7 5 3.
+ <_>
+
+ <_>
+ 9 1 6 13 -1.
+ <_>
+ 11 1 2 13 3.
+ <_>
+
+ <_>
+ 17 2 6 14 -1.
+ <_>
+ 17 2 3 14 2.
+ <_>
+
+ <_>
+ 3 14 12 10 -1.
+ <_>
+ 3 14 6 5 2.
+ <_>
+ 9 19 6 5 2.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 1 2 6 14 -1.
+ <_>
+ 4 2 3 14 2.
+ <_>
+
+ <_>
+ 10 4 5 12 -1.
+ <_>
+ 10 8 5 4 3.
+ <_>
+
+ <_>
+ 0 17 24 5 -1.
+ <_>
+ 8 17 8 5 3.
+ <_>
+
+ <_>
+ 15 7 5 12 -1.
+ <_>
+ 15 11 5 4 3.
+ <_>
+
+ <_>
+ 3 1 6 12 -1.
+ <_>
+ 3 1 3 6 2.
+ <_>
+ 6 7 3 6 2.
+ <_>
+
+ <_>
+ 12 13 6 6 -1.
+ <_>
+ 12 16 6 3 2.
+ <_>
+
+ <_>
+ 6 13 6 6 -1.
+ <_>
+ 6 16 6 3 2.
+ <_>
+
+ <_>
+ 14 6 3 16 -1.
+ <_>
+ 14 14 3 8 2.
+ <_>
+
+ <_>
+ 1 12 13 6 -1.
+ <_>
+ 1 14 13 2 3.
+ <_>
+
+ <_>
+ 13 1 4 9 -1.
+ <_>
+ 13 1 2 9 2.
+ <_>
+
+ <_>
+ 7 0 9 6 -1.
+ <_>
+ 10 0 3 6 3.
+ <_>
+
+ <_>
+ 12 2 6 9 -1.
+ <_>
+ 12 2 3 9 2.
+ <_>
+
+ <_>
+ 6 2 6 9 -1.
+ <_>
+ 9 2 3 9 2.
+ <_>
+
+ <_>
+ 6 18 12 6 -1.
+ <_>
+ 6 20 12 2 3.
+ <_>
+
+ <_>
+ 7 6 6 9 -1.
+ <_>
+ 9 6 2 9 3.
+ <_>
+
+ <_>
+ 7 7 12 3 -1.
+ <_>
+ 7 7 6 3 2.
+ <_>
+
+ <_>
+ 8 3 8 21 -1.
+ <_>
+ 8 10 8 7 3.
+ <_>
+
+ <_>
+ 7 4 10 12 -1.
+ <_>
+ 7 8 10 4 3.
+ <_>
+
+ <_>
+ 0 1 6 9 -1.
+ <_>
+ 0 4 6 3 3.
+ <_>
+
+ <_>
+ 15 2 2 20 -1.
+ <_>
+ 15 2 1 20 2.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 0 6 6 3 3.
+ <_>
+
+ <_>
+ 15 3 2 21 -1.
+ <_>
+ 15 3 1 21 2.
+ <_>
+
+ <_>
+ 7 0 2 23 -1.
+ <_>
+ 8 0 1 23 2.
+ <_>
+
+ <_>
+ 15 8 9 4 -1.
+ <_>
+ 15 10 9 2 2.
+ <_>
+
+ <_>
+ 0 8 9 4 -1.
+ <_>
+ 0 10 9 2 2.
+ <_>
+
+ <_>
+ 8 14 9 6 -1.
+ <_>
+ 8 16 9 2 3.
+ <_>
+
+ <_>
+ 0 14 9 6 -1.
+ <_>
+ 0 16 9 2 3.
+ <_>
+
+ <_>
+ 3 10 18 4 -1.
+ <_>
+ 9 10 6 4 3.
+ <_>
+
+ <_>
+ 0 0 24 19 -1.
+ <_>
+ 8 0 8 19 3.
+ <_>
+
+ <_>
+ 9 1 8 12 -1.
+ <_>
+ 9 7 8 6 2.
+ <_>
+
+ <_>
+ 10 6 4 10 -1.
+ <_>
+ 12 6 2 10 2.
+ <_>
+
+ <_>
+ 7 9 10 12 -1.
+ <_>
+ 12 9 5 6 2.
+ <_>
+ 7 15 5 6 2.
+ <_>
+
+ <_>
+ 5 0 3 19 -1.
+ <_>
+ 6 0 1 19 3.
+ <_>
+
+ <_>
+ 14 0 6 10 -1.
+ <_>
+ 16 0 2 10 3.
+ <_>
+
+ <_>
+ 2 0 6 12 -1.
+ <_>
+ 2 0 3 6 2.
+ <_>
+ 5 6 3 6 2.
+ <_>
+
+ <_>
+ 0 11 24 2 -1.
+ <_>
+ 0 12 24 1 2.
+ <_>
+
+ <_>
+ 4 9 13 4 -1.
+ <_>
+ 4 11 13 2 2.
+ <_>
+
+ <_>
+ 9 8 6 9 -1.
+ <_>
+ 9 11 6 3 3.
+ <_>
+
+ <_>
+ 0 12 16 4 -1.
+ <_>
+ 0 14 16 2 2.
+ <_>
+
+ <_>
+ 18 12 6 9 -1.
+ <_>
+ 18 15 6 3 3.
+ <_>
+
+ <_>
+ 0 12 6 9 -1.
+ <_>
+ 0 15 6 3 3.
+ <_>
+
+ <_>
+ 8 7 10 4 -1.
+ <_>
+ 8 7 5 4 2.
+ <_>
+
+ <_>
+ 8 7 6 9 -1.
+ <_>
+ 10 7 2 9 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 12 3 6 15 -1.
+ <_>
+ 14 3 2 15 3.
+ <_>
+
+ <_>
+ 6 3 6 15 -1.
+ <_>
+ 8 3 2 15 3.
+ <_>
+
+ <_>
+ 15 2 9 4 -1.
+ <_>
+ 15 4 9 2 2.
+ <_>
+
+ <_>
+ 5 10 6 7 -1.
+ <_>
+ 8 10 3 7 2.
+ <_>
+
+ <_>
+ 9 14 6 10 -1.
+ <_>
+ 9 19 6 5 2.
+ <_>
+
+ <_>
+ 7 13 5 8 -1.
+ <_>
+ 7 17 5 4 2.
+ <_>
+
+ <_>
+ 14 5 3 16 -1.
+ <_>
+ 14 13 3 8 2.
+ <_>
+
+ <_>
+ 2 17 18 3 -1.
+ <_>
+ 2 18 18 1 3.
+ <_>
+
+ <_>
+ 5 18 19 3 -1.
+ <_>
+ 5 19 19 1 3.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 12 4 3 18 -1.
+ <_>
+ 13 4 1 18 3.
+ <_>
+
+ <_>
+ 9 4 3 18 -1.
+ <_>
+ 10 4 1 18 3.
+ <_>
+
+ <_>
+ 3 3 18 9 -1.
+ <_>
+ 9 3 6 9 3.
+ <_>
+
+ <_>
+ 6 1 6 14 -1.
+ <_>
+ 8 1 2 14 3.
+ <_>
+
+ <_>
+ 12 16 9 6 -1.
+ <_>
+ 12 19 9 3 2.
+ <_>
+
+ <_>
+ 1 3 20 16 -1.
+ <_>
+ 1 3 10 8 2.
+ <_>
+ 11 11 10 8 2.
+ <_>
+
+ <_>
+ 12 5 6 12 -1.
+ <_>
+ 15 5 3 6 2.
+ <_>
+ 12 11 3 6 2.
+ <_>
+
+ <_>
+ 1 2 22 16 -1.
+ <_>
+ 1 2 11 8 2.
+ <_>
+ 12 10 11 8 2.
+ <_>
+
+ <_>
+ 10 14 5 10 -1.
+ <_>
+ 10 19 5 5 2.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 3 22 18 1 3.
+ <_>
+
+ <_>
+ 10 14 6 10 -1.
+ <_>
+ 12 14 2 10 3.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 8 2 8 4 3.
+ <_>
+
+ <_>
+ 6 4 12 9 -1.
+ <_>
+ 6 7 12 3 3.
+ <_>
+
+ <_>
+ 6 6 12 5 -1.
+ <_>
+ 10 6 4 5 3.
+ <_>
+
+ <_>
+ 5 8 14 12 -1.
+ <_>
+ 5 12 14 4 3.
+ <_>
+
+ <_>
+ 4 14 8 10 -1.
+ <_>
+ 4 14 4 5 2.
+ <_>
+ 8 19 4 5 2.
+ <_>
+
+ <_>
+ 11 6 5 14 -1.
+ <_>
+ 11 13 5 7 2.
+ <_>
+
+ <_>
+ 7 6 3 16 -1.
+ <_>
+ 7 14 3 8 2.
+ <_>
+
+ <_>
+ 3 7 18 8 -1.
+ <_>
+ 9 7 6 8 3.
+ <_>
+
+ <_>
+ 2 3 20 2 -1.
+ <_>
+ 2 4 20 1 2.
+ <_>
+
+ <_>
+ 3 12 19 6 -1.
+ <_>
+ 3 14 19 2 3.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 16 6 6 14 -1.
+ <_>
+ 16 6 3 14 2.
+ <_>
+
+ <_>
+ 7 9 6 12 -1.
+ <_>
+ 9 9 2 12 3.
+ <_>
+
+ <_>
+ 18 6 6 18 -1.
+ <_>
+ 21 6 3 9 2.
+ <_>
+ 18 15 3 9 2.
+ <_>
+
+ <_>
+ 0 6 6 18 -1.
+ <_>
+ 0 6 3 9 2.
+ <_>
+ 3 15 3 9 2.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 18 5 6 3 3.
+ <_>
+
+ <_>
+ 3 18 15 6 -1.
+ <_>
+ 3 20 15 2 3.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 18 5 6 3 3.
+ <_>
+
+ <_>
+ 0 2 6 9 -1.
+ <_>
+ 0 5 6 3 3.
+ <_>
+
+ <_>
+ 5 10 18 2 -1.
+ <_>
+ 5 11 18 1 2.
+ <_>
+
+ <_>
+ 6 0 12 6 -1.
+ <_>
+ 6 2 12 2 3.
+ <_>
+
+ <_>
+ 10 0 6 9 -1.
+ <_>
+ 12 0 2 9 3.
+ <_>
+
+ <_>
+ 8 0 6 9 -1.
+ <_>
+ 10 0 2 9 3.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 3 6 13 6 -1.
+ <_>
+ 3 8 13 2 3.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 2 5 6 15 -1.
+ <_>
+ 5 5 3 15 2.
+ <_>
+
+ <_>
+ 8 8 9 6 -1.
+ <_>
+ 11 8 3 6 3.
+ <_>
+
+ <_>
+ 8 6 3 14 -1.
+ <_>
+ 8 13 3 7 2.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 4 12 10 4 -1.
+ <_>
+ 9 12 5 4 2.
+ <_>
+
+ <_>
+ 13 1 4 19 -1.
+ <_>
+ 13 1 2 19 2.
+ <_>
+
+ <_>
+ 7 1 4 19 -1.
+ <_>
+ 9 1 2 19 2.
+ <_>
+
+ <_>
+ 18 9 6 9 -1.
+ <_>
+ 18 12 6 3 3.
+ <_>
+
+ <_>
+ 1 21 18 3 -1.
+ <_>
+ 1 22 18 1 3.
+ <_>
+
+ <_>
+ 14 13 10 9 -1.
+ <_>
+ 14 16 10 3 3.
+ <_>
+
+ <_>
+ 1 13 22 4 -1.
+ <_>
+ 1 13 11 2 2.
+ <_>
+ 12 15 11 2 2.
+ <_>
+
+ <_>
+ 4 6 16 6 -1.
+ <_>
+ 12 6 8 3 2.
+ <_>
+ 4 9 8 3 2.
+ <_>
+
+ <_>
+ 1 0 18 22 -1.
+ <_>
+ 1 0 9 11 2.
+ <_>
+ 10 11 9 11 2.
+ <_>
+
+ <_>
+ 10 7 8 14 -1.
+ <_>
+ 14 7 4 7 2.
+ <_>
+ 10 14 4 7 2.
+ <_>
+
+ <_>
+ 0 4 6 20 -1.
+ <_>
+ 0 4 3 10 2.
+ <_>
+ 3 14 3 10 2.
+ <_>
+
+ <_>
+ 15 0 6 9 -1.
+ <_>
+ 17 0 2 9 3.
+ <_>
+
+ <_>
+ 3 0 6 9 -1.
+ <_>
+ 5 0 2 9 3.
+ <_>
+
+ <_>
+ 15 12 6 12 -1.
+ <_>
+ 18 12 3 6 2.
+ <_>
+ 15 18 3 6 2.
+ <_>
+
+ <_>
+ 3 12 6 12 -1.
+ <_>
+ 3 12 3 6 2.
+ <_>
+ 6 18 3 6 2.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 0 12 9 6 -1.
+ <_>
+ 0 14 9 2 3.
+ <_>
+
+ <_>
+ 4 14 19 3 -1.
+ <_>
+ 4 15 19 1 3.
+ <_>
+
+ <_>
+ 2 13 19 3 -1.
+ <_>
+ 2 14 19 1 3.
+ <_>
+
+ <_>
+ 14 15 10 6 -1.
+ <_>
+ 14 17 10 2 3.
+ <_>
+
+ <_>
+ 6 0 10 12 -1.
+ <_>
+ 6 0 5 6 2.
+ <_>
+ 11 6 5 6 2.
+ <_>
+
+ <_>
+ 17 1 6 12 -1.
+ <_>
+ 20 1 3 6 2.
+ <_>
+ 17 7 3 6 2.
+ <_>
+
+ <_>
+ 1 1 6 12 -1.
+ <_>
+ 1 1 3 6 2.
+ <_>
+ 4 7 3 6 2.
+ <_>
+
+ <_>
+ 16 14 6 9 -1.
+ <_>
+ 16 17 6 3 3.
+ <_>
+
+ <_>
+ 7 3 9 12 -1.
+ <_>
+ 7 9 9 6 2.
+ <_>
+
+ <_>
+ 12 1 4 12 -1.
+ <_>
+ 12 7 4 6 2.
+ <_>
+
+ <_>
+ 4 0 14 8 -1.
+ <_>
+ 4 4 14 4 2.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 2 10 18 3 -1.
+ <_>
+ 8 10 6 3 3.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 0 1 21 23 -1.
+ <_>
+ 7 1 7 23 3.
+ <_>
+
+ <_>
+ 6 9 17 4 -1.
+ <_>
+ 6 11 17 2 2.
+ <_>
+
+ <_>
+ 1 0 11 18 -1.
+ <_>
+ 1 6 11 6 3.
+ <_>
+
+ <_>
+ 6 15 13 6 -1.
+ <_>
+ 6 17 13 2 3.
+ <_>
+
+ <_>
+ 0 15 9 6 -1.
+ <_>
+ 0 17 9 2 3.
+ <_>
+
+ <_>
+ 8 7 15 4 -1.
+ <_>
+ 13 7 5 4 3.
+ <_>
+
+ <_>
+ 9 12 6 9 -1.
+ <_>
+ 9 15 6 3 3.
+ <_>
+
+ <_>
+ 6 8 18 3 -1.
+ <_>
+ 12 8 6 3 3.
+ <_>
+
+ <_>
+ 0 14 24 4 -1.
+ <_>
+ 8 14 8 4 3.
+ <_>
+
+ <_>
+ 16 10 3 12 -1.
+ <_>
+ 16 16 3 6 2.
+ <_>
+
+ <_>
+ 0 3 24 3 -1.
+ <_>
+ 0 4 24 1 3.
+ <_>
+
+ <_>
+ 14 17 10 6 -1.
+ <_>
+ 14 19 10 2 3.
+ <_>
+
+ <_>
+ 1 13 18 3 -1.
+ <_>
+ 7 13 6 3 3.
+ <_>
+
+ <_>
+ 5 0 18 9 -1.
+ <_>
+ 5 3 18 3 3.
+ <_>
+
+ <_>
+ 4 3 16 9 -1.
+ <_>
+ 4 6 16 3 3.
+ <_>
+
+ <_>
+ 16 5 3 12 -1.
+ <_>
+ 16 11 3 6 2.
+ <_>
+
+ <_>
+ 0 7 18 4 -1.
+ <_>
+ 6 7 6 4 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 9 8 6 10 -1.
+ <_>
+ 11 8 2 10 3.
+ <_>
+
+ <_>
+ 9 15 6 9 -1.
+ <_>
+ 11 15 2 9 3.
+ <_>
+
+ <_>
+ 3 1 18 21 -1.
+ <_>
+ 12 1 9 21 2.
+ <_>
+
+ <_>
+ 6 8 12 7 -1.
+ <_>
+ 6 8 6 7 2.
+ <_>
+
+ <_>
+ 8 5 6 9 -1.
+ <_>
+ 10 5 2 9 3.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 8 2 8 4 3.
+ <_>
+
+ <_>
+ 14 7 5 12 -1.
+ <_>
+ 14 11 5 4 3.
+ <_>
+
+ <_>
+ 5 7 5 12 -1.
+ <_>
+ 5 11 5 4 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 0 1 6 17 -1.
+ <_>
+ 3 1 3 17 2.
+ <_>
+
+ <_>
+ 3 1 19 9 -1.
+ <_>
+ 3 4 19 3 3.
+ <_>
+
+ <_>
+ 3 18 12 6 -1.
+ <_>
+ 3 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 20 4 4 19 -1.
+ <_>
+ 20 4 2 19 2.
+ <_>
+
+ <_>
+ 0 16 10 7 -1.
+ <_>
+ 5 16 5 7 2.
+ <_>
+
+ <_>
+ 8 7 10 12 -1.
+ <_>
+ 13 7 5 6 2.
+ <_>
+ 8 13 5 6 2.
+ <_>
+
+ <_>
+ 6 7 10 12 -1.
+ <_>
+ 6 7 5 6 2.
+ <_>
+ 11 13 5 6 2.
+ <_>
+
+ <_>
+ 9 2 9 6 -1.
+ <_>
+ 12 2 3 6 3.
+ <_>
+
+ <_>
+ 1 20 21 4 -1.
+ <_>
+ 8 20 7 4 3.
+ <_>
+
+ <_>
+ 9 12 9 6 -1.
+ <_>
+ 9 14 9 2 3.
+ <_>
+
+ <_>
+ 7 2 9 6 -1.
+ <_>
+ 10 2 3 6 3.
+ <_>
+
+ <_>
+ 13 0 4 14 -1.
+ <_>
+ 13 0 2 14 2.
+ <_>
+
+ <_>
+ 7 0 4 14 -1.
+ <_>
+ 9 0 2 14 2.
+ <_>
+
+ <_>
+ 14 15 9 6 -1.
+ <_>
+ 14 17 9 2 3.
+ <_>
+
+ <_>
+ 2 8 18 5 -1.
+ <_>
+ 8 8 6 5 3.
+ <_>
+
+ <_>
+ 18 3 6 11 -1.
+ <_>
+ 20 3 2 11 3.
+ <_>
+
+ <_>
+ 6 5 11 14 -1.
+ <_>
+ 6 12 11 7 2.
+ <_>
+
+ <_>
+ 18 4 6 9 -1.
+ <_>
+ 18 7 6 3 3.
+ <_>
+
+ <_>
+ 7 6 9 6 -1.
+ <_>
+ 7 8 9 2 3.
+ <_>
+
+ <_>
+ 18 4 6 9 -1.
+ <_>
+ 18 7 6 3 3.
+ <_>
+
+ <_>
+ 0 4 6 9 -1.
+ <_>
+ 0 7 6 3 3.
+ <_>
+
+ <_>
+ 9 4 9 4 -1.
+ <_>
+ 9 6 9 2 2.
+ <_>
+
+ <_>
+ 0 22 19 2 -1.
+ <_>
+ 0 23 19 1 2.
+ <_>
+
+ <_>
+ 17 14 6 9 -1.
+ <_>
+ 17 17 6 3 3.
+ <_>
+
+ <_>
+ 1 14 6 9 -1.
+ <_>
+ 1 17 6 3 3.
+ <_>
+
+ <_>
+ 14 11 4 9 -1.
+ <_>
+ 14 11 2 9 2.
+ <_>
+
+ <_>
+ 6 11 4 9 -1.
+ <_>
+ 8 11 2 9 2.
+ <_>
+
+ <_>
+ 3 9 18 7 -1.
+ <_>
+ 9 9 6 7 3.
+ <_>
+
+ <_>
+ 9 12 6 10 -1.
+ <_>
+ 9 17 6 5 2.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 6 17 18 3 -1.
+ <_>
+ 6 18 18 1 3.
+ <_>
+
+ <_>
+ 1 17 18 3 -1.
+ <_>
+ 1 18 18 1 3.
+ <_>
+
+ <_>
+ 10 6 11 12 -1.
+ <_>
+ 10 12 11 6 2.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 5 6 7 3 2.
+ <_>
+ 12 9 7 3 2.
+ <_>
+
+ <_>
+ 5 4 15 4 -1.
+ <_>
+ 5 6 15 2 2.
+ <_>
+
+ <_>
+ 0 0 22 2 -1.
+ <_>
+ 0 1 22 1 2.
+ <_>
+
+ <_>
+ 0 0 24 24 -1.
+ <_>
+ 8 0 8 24 3.
+ <_>
+
+ <_>
+ 1 15 18 4 -1.
+ <_>
+ 10 15 9 4 2.
+ <_>
+
+ <_>
+ 6 8 12 9 -1.
+ <_>
+ 6 11 12 3 3.
+ <_>
+
+ <_>
+ 4 12 7 12 -1.
+ <_>
+ 4 16 7 4 3.
+ <_>
+
+ <_>
+ 1 2 22 6 -1.
+ <_>
+ 12 2 11 3 2.
+ <_>
+ 1 5 11 3 2.
+ <_>
+
+ <_>
+ 5 20 14 3 -1.
+ <_>
+ 12 20 7 3 2.
+ <_>
+
+ <_>
+ 0 0 24 16 -1.
+ <_>
+ 12 0 12 8 2.
+ <_>
+ 0 8 12 8 2.
+ <_>
+
+ <_>
+ 3 13 18 4 -1.
+ <_>
+ 3 13 9 2 2.
+ <_>
+ 12 15 9 2 2.
+ <_>
+
+ <_>
+ 2 10 22 2 -1.
+ <_>
+ 2 11 22 1 2.
+ <_>
+
+ <_>
+ 6 3 11 8 -1.
+ <_>
+ 6 7 11 4 2.
+ <_>
+
+ <_>
+ 14 5 6 6 -1.
+ <_>
+ 14 8 6 3 2.
+ <_>
+
+ <_>
+ 0 7 24 6 -1.
+ <_>
+ 0 9 24 2 3.
+ <_>
+
+ <_>
+ 14 0 10 10 -1.
+ <_>
+ 19 0 5 5 2.
+ <_>
+ 14 5 5 5 2.
+ <_>
+
+ <_>
+ 0 0 10 10 -1.
+ <_>
+ 0 0 5 5 2.
+ <_>
+ 5 5 5 5 2.
+ <_>
+
+ <_>
+ 0 1 24 4 -1.
+ <_>
+ 12 1 12 2 2.
+ <_>
+ 0 3 12 2 2.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 5 15 16 6 -1.
+ <_>
+ 13 15 8 3 2.
+ <_>
+ 5 18 8 3 2.
+ <_>
+
+ <_>
+ 3 15 16 6 -1.
+ <_>
+ 3 15 8 3 2.
+ <_>
+ 11 18 8 3 2.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 0 13 21 10 -1.
+ <_>
+ 0 18 21 5 2.
+ <_>
+
+ <_>
+ 13 0 6 24 -1.
+ <_>
+ 15 0 2 24 3.
+ <_>
+
+ <_>
+ 7 4 6 11 -1.
+ <_>
+ 9 4 2 11 3.
+ <_>
+
+ <_>
+ 9 5 9 6 -1.
+ <_>
+ 12 5 3 6 3.
+ <_>
+
+ <_>
+ 1 4 2 20 -1.
+ <_>
+ 1 14 2 10 2.
+ <_>
+
+ <_>
+ 13 0 6 24 -1.
+ <_>
+ 15 0 2 24 3.
+ <_>
+
+ <_>
+ 5 0 6 24 -1.
+ <_>
+ 7 0 2 24 3.
+ <_>
+
+ <_>
+ 16 7 6 14 -1.
+ <_>
+ 19 7 3 7 2.
+ <_>
+ 16 14 3 7 2.
+ <_>
+
+ <_>
+ 4 7 4 12 -1.
+ <_>
+ 6 7 2 12 2.
+ <_>
+
+ <_>
+ 0 5 24 14 -1.
+ <_>
+ 8 5 8 14 3.
+ <_>
+
+ <_>
+ 5 13 10 6 -1.
+ <_>
+ 5 15 10 2 3.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 2 7 6 14 -1.
+ <_>
+ 2 7 3 7 2.
+ <_>
+ 5 14 3 7 2.
+ <_>
+
+ <_>
+ 15 2 9 15 -1.
+ <_>
+ 18 2 3 15 3.
+ <_>
+
+ <_>
+ 0 2 6 9 -1.
+ <_>
+ 2 2 2 9 3.
+ <_>
+
+ <_>
+ 12 2 10 14 -1.
+ <_>
+ 17 2 5 7 2.
+ <_>
+ 12 9 5 7 2.
+ <_>
+
+ <_>
+ 11 6 2 18 -1.
+ <_>
+ 12 6 1 18 2.
+ <_>
+
+ <_>
+ 9 5 15 6 -1.
+ <_>
+ 14 5 5 6 3.
+ <_>
+
+ <_>
+ 8 6 6 10 -1.
+ <_>
+ 10 6 2 10 3.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 3 3 9 7 -1.
+ <_>
+ 6 3 3 7 3.
+ <_>
+
+ <_>
+ 6 7 14 3 -1.
+ <_>
+ 6 7 7 3 2.
+ <_>
+
+ <_>
+ 7 7 8 6 -1.
+ <_>
+ 11 7 4 6 2.
+ <_>
+
+ <_>
+ 12 7 7 12 -1.
+ <_>
+ 12 13 7 6 2.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 10 6 2 9 2.
+ <_>
+ 12 15 2 9 2.
+ <_>
+
+ <_>
+ 16 14 6 9 -1.
+ <_>
+ 16 17 6 3 3.
+ <_>
+
+ <_>
+ 4 0 6 13 -1.
+ <_>
+ 6 0 2 13 3.
+ <_>
+
+ <_>
+ 2 2 21 3 -1.
+ <_>
+ 9 2 7 3 3.
+ <_>
+
+ <_>
+ 5 4 5 12 -1.
+ <_>
+ 5 8 5 4 3.
+ <_>
+
+ <_>
+ 10 3 4 10 -1.
+ <_>
+ 10 8 4 5 2.
+ <_>
+
+ <_>
+ 8 4 5 8 -1.
+ <_>
+ 8 8 5 4 2.
+ <_>
+
+ <_>
+ 6 0 11 9 -1.
+ <_>
+ 6 3 11 3 3.
+ <_>
+
+ <_>
+ 6 6 12 5 -1.
+ <_>
+ 10 6 4 5 3.
+ <_>
+
+ <_>
+ 0 0 24 5 -1.
+ <_>
+ 8 0 8 5 3.
+ <_>
+
+ <_>
+ 1 10 23 6 -1.
+ <_>
+ 1 12 23 2 3.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 9 21 6 3 3.
+ <_>
+
+ <_>
+ 3 6 21 6 -1.
+ <_>
+ 3 8 21 2 3.
+ <_>
+
+ <_>
+ 0 5 6 12 -1.
+ <_>
+ 2 5 2 12 3.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 8 7 8 10 -1.
+ <_>
+ 8 12 8 5 2.
+ <_>
+
+ <_>
+ 5 7 15 12 -1.
+ <_>
+ 10 7 5 12 3.
+ <_>
+
+ <_>
+ 0 17 10 6 -1.
+ <_>
+ 0 19 10 2 3.
+ <_>
+
+ <_>
+ 14 18 9 6 -1.
+ <_>
+ 14 20 9 2 3.
+ <_>
+
+ <_>
+ 9 6 6 16 -1.
+ <_>
+ 9 14 6 8 2.
+ <_>
+
+ <_>
+ 14 18 9 6 -1.
+ <_>
+ 14 20 9 2 3.
+ <_>
+
+ <_>
+ 1 18 9 6 -1.
+ <_>
+ 1 20 9 2 3.
+ <_>
+
+ <_>
+ 15 9 9 6 -1.
+ <_>
+ 15 11 9 2 3.
+ <_>
+
+ <_>
+ 0 9 9 6 -1.
+ <_>
+ 0 11 9 2 3.
+ <_>
+
+ <_>
+ 17 3 6 9 -1.
+ <_>
+ 19 3 2 9 3.
+ <_>
+
+ <_>
+ 2 17 18 3 -1.
+ <_>
+ 2 18 18 1 3.
+ <_>
+
+ <_>
+ 3 15 21 6 -1.
+ <_>
+ 3 17 21 2 3.
+ <_>
+
+ <_>
+ 9 17 6 6 -1.
+ <_>
+ 9 20 6 3 2.
+ <_>
+
+ <_>
+ 18 3 6 9 -1.
+ <_>
+ 18 6 6 3 3.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 0 6 6 3 3.
+ <_>
+
+ <_>
+ 4 0 16 10 -1.
+ <_>
+ 12 0 8 5 2.
+ <_>
+ 4 5 8 5 2.
+ <_>
+
+ <_>
+ 2 0 10 16 -1.
+ <_>
+ 2 0 5 8 2.
+ <_>
+ 7 8 5 8 2.
+ <_>
+
+ <_>
+ 14 0 10 5 -1.
+ <_>
+ 14 0 5 5 2.
+ <_>
+
+ <_>
+ 0 0 10 5 -1.
+ <_>
+ 5 0 5 5 2.
+ <_>
+
+ <_>
+ 18 3 6 10 -1.
+ <_>
+ 18 3 3 10 2.
+ <_>
+
+ <_>
+ 5 11 12 6 -1.
+ <_>
+ 5 11 6 3 2.
+ <_>
+ 11 14 6 3 2.
+ <_>
+
+ <_>
+ 21 0 3 18 -1.
+ <_>
+ 22 0 1 18 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 8 8 9 7 -1.
+ <_>
+ 11 8 3 7 3.
+ <_>
+
+ <_>
+ 7 12 8 10 -1.
+ <_>
+ 7 12 4 5 2.
+ <_>
+ 11 17 4 5 2.
+ <_>
+
+ <_>
+ 21 0 3 18 -1.
+ <_>
+ 22 0 1 18 3.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+
+ <_>
+ 15 0 9 6 -1.
+ <_>
+ 15 2 9 2 3.
+ <_>
+
+ <_>
+ 0 2 24 3 -1.
+ <_>
+ 0 3 24 1 3.
+ <_>
+
+ <_>
+ 11 7 6 9 -1.
+ <_>
+ 13 7 2 9 3.
+ <_>
+
+ <_>
+ 7 6 6 10 -1.
+ <_>
+ 9 6 2 10 3.
+ <_>
+
+ <_>
+ 12 1 6 12 -1.
+ <_>
+ 14 1 2 12 3.
+ <_>
+
+ <_>
+ 6 4 12 12 -1.
+ <_>
+ 6 10 12 6 2.
+ <_>
+
+ <_>
+ 14 3 2 21 -1.
+ <_>
+ 14 3 1 21 2.
+ <_>
+
+ <_>
+ 6 1 12 8 -1.
+ <_>
+ 6 5 12 4 2.
+ <_>
+
+ <_>
+ 3 0 18 8 -1.
+ <_>
+ 3 4 18 4 2.
+ <_>
+
+ <_>
+ 3 0 18 3 -1.
+ <_>
+ 3 1 18 1 3.
+ <_>
+
+ <_>
+ 0 13 24 4 -1.
+ <_>
+ 12 13 12 2 2.
+ <_>
+ 0 15 12 2 2.
+ <_>
+
+ <_>
+ 10 5 4 9 -1.
+ <_>
+ 12 5 2 9 2.
+ <_>
+
+ <_>
+ 11 1 6 9 -1.
+ <_>
+ 13 1 2 9 3.
+ <_>
+
+ <_>
+ 6 2 6 22 -1.
+ <_>
+ 8 2 2 22 3.
+ <_>
+
+ <_>
+ 16 10 8 14 -1.
+ <_>
+ 20 10 4 7 2.
+ <_>
+ 16 17 4 7 2.
+ <_>
+
+ <_>
+ 3 4 16 15 -1.
+ <_>
+ 3 9 16 5 3.
+ <_>
+
+ <_>
+ 16 10 8 14 -1.
+ <_>
+ 20 10 4 7 2.
+ <_>
+ 16 17 4 7 2.
+ <_>
+
+ <_>
+ 0 10 8 14 -1.
+ <_>
+ 0 10 4 7 2.
+ <_>
+ 4 17 4 7 2.
+ <_>
+
+ <_>
+ 10 14 11 6 -1.
+ <_>
+ 10 17 11 3 2.
+ <_>
+
+ <_>
+ 0 7 24 9 -1.
+ <_>
+ 8 7 8 9 3.
+ <_>
+
+ <_>
+ 13 1 4 16 -1.
+ <_>
+ 13 1 2 16 2.
+ <_>
+
+ <_>
+ 7 1 4 16 -1.
+ <_>
+ 9 1 2 16 2.
+ <_>
+
+ <_>
+ 5 5 16 8 -1.
+ <_>
+ 13 5 8 4 2.
+ <_>
+ 5 9 8 4 2.
+ <_>
+
+ <_>
+ 0 9 6 9 -1.
+ <_>
+ 0 12 6 3 3.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 3 12 6 9 -1.
+ <_>
+ 3 15 6 3 3.
+ <_>
+
+ <_>
+ 8 14 9 6 -1.
+ <_>
+ 8 16 9 2 3.
+ <_>
+
+ <_>
+ 2 13 8 10 -1.
+ <_>
+ 2 13 4 5 2.
+ <_>
+ 6 18 4 5 2.
+ <_>
+
+ <_>
+ 15 5 3 18 -1.
+ <_>
+ 15 11 3 6 3.
+ <_>
+
+ <_>
+ 3 5 18 3 -1.
+ <_>
+ 3 6 18 1 3.
+ <_>
+
+ <_>
+ 17 5 6 11 -1.
+ <_>
+ 19 5 2 11 3.
+ <_>
+
+ <_>
+ 1 5 6 11 -1.
+ <_>
+ 3 5 2 11 3.
+ <_>
+
+ <_>
+ 19 1 4 9 -1.
+ <_>
+ 19 1 2 9 2.
+ <_>
+
+ <_>
+ 1 1 4 9 -1.
+ <_>
+ 3 1 2 9 2.
+ <_>
+
+ <_>
+ 4 15 18 9 -1.
+ <_>
+ 4 15 9 9 2.
+ <_>
+
+ <_>
+ 6 9 12 4 -1.
+ <_>
+ 6 11 12 2 2.
+ <_>
+
+ <_>
+ 15 2 9 6 -1.
+ <_>
+ 15 4 9 2 3.
+ <_>
+
+ <_>
+ 0 2 9 6 -1.
+ <_>
+ 0 4 9 2 3.
+ <_>
+
+ <_>
+ 15 0 6 17 -1.
+ <_>
+ 17 0 2 17 3.
+ <_>
+
+ <_>
+ 3 0 6 17 -1.
+ <_>
+ 5 0 2 17 3.
+ <_>
+
+ <_>
+ 8 17 9 4 -1.
+ <_>
+ 8 19 9 2 2.
+ <_>
+
+ <_>
+ 6 5 3 18 -1.
+ <_>
+ 6 11 3 6 3.
+ <_>
+
+ <_>
+ 5 2 14 12 -1.
+ <_>
+ 5 8 14 6 2.
+ <_>
+
+ <_>
+ 10 2 3 12 -1.
+ <_>
+ 10 8 3 6 2.
+ <_>
+
+ <_>
+ 10 7 14 15 -1.
+ <_>
+ 10 12 14 5 3.
+ <_>
+
+ <_>
+ 0 7 14 15 -1.
+ <_>
+ 0 12 14 5 3.
+ <_>
+
+ <_>
+ 15 0 9 6 -1.
+ <_>
+ 15 2 9 2 3.
+ <_>
+
+ <_>
+ 0 0 9 6 -1.
+ <_>
+ 0 2 9 2 3.
+ <_>
+
+ <_>
+ 12 6 6 14 -1.
+ <_>
+ 14 6 2 14 3.
+ <_>
+
+ <_>
+ 9 7 6 9 -1.
+ <_>
+ 11 7 2 9 3.
+ <_>
+
+ <_>
+ 12 6 6 15 -1.
+ <_>
+ 14 6 2 15 3.
+ <_>
+
+ <_>
+ 6 6 6 15 -1.
+ <_>
+ 8 6 2 15 3.
+ <_>
+
+ <_>
+ 15 3 8 9 -1.
+ <_>
+ 15 3 4 9 2.
+ <_>
+
+ <_>
+ 0 0 9 21 -1.
+ <_>
+ 3 0 3 21 3.
+ <_>
+
+ <_>
+ 11 9 8 12 -1.
+ <_>
+ 11 13 8 4 3.
+ <_>
+
+ <_>
+ 6 7 10 12 -1.
+ <_>
+ 6 7 5 6 2.
+ <_>
+ 11 13 5 6 2.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+ 10 15 2 9 2.
+ <_>
+
+ <_>
+ 0 0 6 9 -1.
+ <_>
+ 0 3 6 3 3.
+ <_>
+
+ <_>
+ 3 14 18 3 -1.
+ <_>
+ 3 15 18 1 3.
+ <_>
+
+ <_>
+ 3 14 8 10 -1.
+ <_>
+ 3 14 4 5 2.
+ <_>
+ 7 19 4 5 2.
+ <_>
+
+ <_>
+ 0 12 24 4 -1.
+ <_>
+ 12 12 12 2 2.
+ <_>
+ 0 14 12 2 2.
+ <_>
+
+ <_>
+ 0 2 3 20 -1.
+ <_>
+ 1 2 1 20 3.
+ <_>
+
+ <_>
+ 12 16 10 8 -1.
+ <_>
+ 17 16 5 4 2.
+ <_>
+ 12 20 5 4 2.
+ <_>
+
+ <_>
+ 2 16 10 8 -1.
+ <_>
+ 2 16 5 4 2.
+ <_>
+ 7 20 5 4 2.
+ <_>
+
+ <_>
+ 7 0 10 9 -1.
+ <_>
+ 7 3 10 3 3.
+ <_>
+
+ <_>
+ 0 0 24 3 -1.
+ <_>
+ 8 0 8 3 3.
+ <_>
+
+ <_>
+ 3 8 15 4 -1.
+ <_>
+ 3 10 15 2 2.
+ <_>
+
+ <_>
+ 6 5 12 6 -1.
+ <_>
+ 10 5 4 6 3.
+ <_>
+
+ <_>
+ 5 13 14 6 -1.
+ <_>
+ 5 16 14 3 2.
+ <_>
+
+ <_>
+ 11 14 4 10 -1.
+ <_>
+ 11 19 4 5 2.
+ <_>
+
+ <_>
+ 0 6 6 7 -1.
+ <_>
+ 3 6 3 7 2.
+ <_>
+
+ <_>
+ 18 0 6 6 -1.
+ <_>
+ 18 0 3 6 2.
+ <_>
+
+ <_>
+ 3 1 18 3 -1.
+ <_>
+ 3 2 18 1 3.
+ <_>
+
+ <_>
+ 9 6 14 18 -1.
+ <_>
+ 9 12 14 6 3.
+ <_>
+
+ <_>
+ 0 0 6 6 -1.
+ <_>
+ 3 0 3 6 2.
+ <_>
+
+ <_>
+ 13 11 6 6 -1.
+ <_>
+ 13 11 3 6 2.
+ <_>
+
+ <_>
+ 0 20 24 3 -1.
+ <_>
+ 8 20 8 3 3.
+ <_>
+
+ <_>
+ 13 11 6 7 -1.
+ <_>
+ 13 11 3 7 2.
+ <_>
+
+ <_>
+ 4 12 10 6 -1.
+ <_>
+ 4 14 10 2 3.
+ <_>
+
+ <_>
+ 13 11 6 6 -1.
+ <_>
+ 13 11 3 6 2.
+ <_>
+
+ <_>
+ 5 11 6 7 -1.
+ <_>
+ 8 11 3 7 2.
+ <_>
+
+ <_>
+ 7 4 11 12 -1.
+ <_>
+ 7 8 11 4 3.
+ <_>
+
+ <_>
+ 6 15 10 4 -1.
+ <_>
+ 6 17 10 2 2.
+ <_>
+
+ <_>
+ 14 0 6 9 -1.
+ <_>
+ 16 0 2 9 3.
+ <_>
+
+ <_>
+ 4 0 6 9 -1.
+ <_>
+ 6 0 2 9 3.
+ <_>
+
+ <_>
+ 11 2 4 15 -1.
+ <_>
+ 11 7 4 5 3.
+ <_>
+
+ <_>
+ 0 0 20 3 -1.
+ <_>
+ 0 1 20 1 3.
+ <_>
+
+ <_>
+ 13 18 10 6 -1.
+ <_>
+ 13 20 10 2 3.
+ <_>
+
+ <_>
+ 2 7 6 11 -1.
+ <_>
+ 5 7 3 11 2.
+ <_>
+
+ <_>
+ 10 14 10 9 -1.
+ <_>
+ 10 17 10 3 3.
+ <_>
+
+ <_>
+ 8 2 4 9 -1.
+ <_>
+ 10 2 2 9 2.
+ <_>
+
+ <_>
+ 14 3 10 4 -1.
+ <_>
+ 14 3 5 4 2.
+ <_>
+
+ <_>
+ 6 6 12 6 -1.
+ <_>
+ 6 6 6 3 2.
+ <_>
+ 12 9 6 3 2.
+ <_>
+
+ <_>
+ 8 8 8 10 -1.
+ <_>
+ 12 8 4 5 2.
+ <_>
+ 8 13 4 5 2.
+ <_>
+
+ <_>
+ 7 4 4 16 -1.
+ <_>
+ 7 12 4 8 2.
+ <_>
+
+ <_>
+ 8 8 9 4 -1.
+ <_>
+ 8 10 9 2 2.
+ <_>
+
+ <_>
+ 5 2 14 9 -1.
+ <_>
+ 5 5 14 3 3.
+ <_>
+
+ <_>
+ 3 16 19 8 -1.
+ <_>
+ 3 20 19 4 2.
+ <_>
+
+ <_>
+ 0 0 10 8 -1.
+ <_>
+ 5 0 5 8 2.
+ <_>
+
+ <_>
+ 5 2 16 18 -1.
+ <_>
+ 5 2 8 18 2.
+ <_>
+
+ <_>
+ 0 11 24 11 -1.
+ <_>
+ 8 11 8 11 3.
+ <_>
+
+ <_>
+ 3 3 18 5 -1.
+ <_>
+ 3 3 9 5 2.
+ <_>
+
+ <_>
+ 1 16 18 3 -1.
+ <_>
+ 1 17 18 1 3.
+ <_>
+
+ <_>
+ 5 17 18 3 -1.
+ <_>
+ 5 18 18 1 3.
+ <_>
+
+ <_>
+ 1 13 9 6 -1.
+ <_>
+ 1 15 9 2 3.
+ <_>
+
+ <_>
+ 1 9 23 10 -1.
+ <_>
+ 1 14 23 5 2.
+ <_>
+
+ <_>
+ 3 7 18 3 -1.
+ <_>
+ 3 8 18 1 3.
+ <_>
+
+ <_>
+ 6 8 12 3 -1.
+ <_>
+ 6 8 6 3 2.
+ <_>
+
+ <_>
+ 6 2 3 22 -1.
+ <_>
+ 7 2 1 22 3.
+ <_>
+
+ <_>
+ 14 17 10 6 -1.
+ <_>
+ 14 19 10 2 3.
+ <_>
+
+ <_>
+ 1 18 10 6 -1.
+ <_>
+ 1 20 10 2 3.
+ <_>
+
+ <_>
+ 11 3 6 12 -1.
+ <_>
+ 13 3 2 12 3.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 12 10 9 6 -1.
+ <_>
+ 15 10 3 6 3.
+ <_>
+
+ <_>
+ 2 11 6 9 -1.
+ <_>
+ 5 11 3 9 2.
+ <_>
+
+ <_>
+ 14 5 3 19 -1.
+ <_>
+ 15 5 1 19 3.
+ <_>
+
+ <_>
+ 6 6 9 6 -1.
+ <_>
+ 6 8 9 2 3.
+ <_>
+
+ <_>
+ 14 5 3 19 -1.
+ <_>
+ 15 5 1 19 3.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 0 6 6 3 3.
+ <_>
+
+ <_>
+ 5 21 18 3 -1.
+ <_>
+ 5 22 18 1 3.
+ <_>
+
+ <_>
+ 1 10 18 4 -1.
+ <_>
+ 7 10 6 4 3.
+ <_>
+
+ <_>
+ 13 4 8 10 -1.
+ <_>
+ 17 4 4 5 2.
+ <_>
+ 13 9 4 5 2.
+ <_>
+
+ <_>
+ 7 8 9 6 -1.
+ <_>
+ 10 8 3 6 3.
+ <_>
+
+ <_>
+ 12 9 9 8 -1.
+ <_>
+ 15 9 3 8 3.
+ <_>
+
+ <_>
+ 0 6 5 12 -1.
+ <_>
+ 0 10 5 4 3.
+ <_>
+
+ <_>
+ 7 6 14 6 -1.
+ <_>
+ 14 6 7 3 2.
+ <_>
+ 7 9 7 3 2.
+ <_>
+
+ <_>
+ 7 5 3 19 -1.
+ <_>
+ 8 5 1 19 3.
+ <_>
+
+ <_>
+ 8 4 15 20 -1.
+ <_>
+ 13 4 5 20 3.
+ <_>
+
+ <_>
+ 1 4 15 20 -1.
+ <_>
+ 6 4 5 20 3.
+ <_>
+
+ <_>
+ 13 10 6 6 -1.
+ <_>
+ 13 10 3 6 2.
+ <_>
+
+ <_>
+ 5 10 6 6 -1.
+ <_>
+ 8 10 3 6 2.
+ <_>
+
+ <_>
+ 14 2 6 14 -1.
+ <_>
+ 17 2 3 7 2.
+ <_>
+ 14 9 3 7 2.
+ <_>
+
+ <_>
+ 4 2 6 14 -1.
+ <_>
+ 4 2 3 7 2.
+ <_>
+ 7 9 3 7 2.
+ <_>
+
+ <_>
+ 12 4 6 7 -1.
+ <_>
+ 12 4 3 7 2.
+ <_>
+
+ <_>
+ 9 4 6 9 -1.
+ <_>
+ 11 4 2 9 3.
+ <_>
+
+ <_>
+ 11 4 8 10 -1.
+ <_>
+ 11 4 4 10 2.
+ <_>
+
+ <_>
+ 5 4 8 10 -1.
+ <_>
+ 9 4 4 10 2.
+ <_>
+
+ <_>
+ 8 18 10 6 -1.
+ <_>
+ 8 20 10 2 3.
+ <_>
+
+ <_>
+ 1 18 21 6 -1.
+ <_>
+ 1 20 21 2 3.
+ <_>
+
+ <_>
+ 9 2 12 6 -1.
+ <_>
+ 9 2 6 6 2.
+ <_>
+
+ <_>
+ 3 2 12 6 -1.
+ <_>
+ 9 2 6 6 2.
+ <_>
+
+ <_>
+ 12 5 12 6 -1.
+ <_>
+ 18 5 6 3 2.
+ <_>
+ 12 8 6 3 2.
+ <_>
+
+ <_>
+ 8 8 6 9 -1.
+ <_>
+ 8 11 6 3 3.
+ <_>
+
+ <_>
+ 2 7 20 6 -1.
+ <_>
+ 2 9 20 2 3.
+ <_>
+
+ <_>
+ 0 5 12 6 -1.
+ <_>
+ 0 5 6 3 2.
+ <_>
+ 6 8 6 3 2.
+ <_>
+
+ <_>
+ 14 14 8 10 -1.
+ <_>
+ 18 14 4 5 2.
+ <_>
+ 14 19 4 5 2.
+ <_>
+
+ <_>
+ 2 14 8 10 -1.
+ <_>
+ 2 14 4 5 2.
+ <_>
+ 6 19 4 5 2.
+ <_>
+
+ <_>
+ 2 11 20 13 -1.
+ <_>
+ 2 11 10 13 2.
+ <_>
+
+ <_>
+ 6 9 12 5 -1.
+ <_>
+ 12 9 6 5 2.
+ <_>
+
+ <_>
+ 5 6 16 6 -1.
+ <_>
+ 13 6 8 3 2.
+ <_>
+ 5 9 8 3 2.
+ <_>
+
+ <_>
+ 1 19 9 4 -1.
+ <_>
+ 1 21 9 2 2.
+ <_>
+
+ <_>
+ 7 5 12 5 -1.
+ <_>
+ 11 5 4 5 3.
+ <_>
+
+ <_>
+ 3 5 14 12 -1.
+ <_>
+ 3 5 7 6 2.
+ <_>
+ 10 11 7 6 2.
+ <_>
+
+ <_>
+ 9 4 9 6 -1.
+ <_>
+ 12 4 3 6 3.
+ <_>
+
+ <_>
+ 2 6 19 3 -1.
+ <_>
+ 2 7 19 1 3.
+ <_>
+
+ <_>
+ 18 10 6 9 -1.
+ <_>
+ 18 13 6 3 3.
+ <_>
+
+ <_>
+ 3 7 18 2 -1.
+ <_>
+ 3 8 18 1 2.
+ <_>
+
+ <_>
+ 20 2 4 18 -1.
+ <_>
+ 22 2 2 9 2.
+ <_>
+ 20 11 2 9 2.
+ <_>
+
+ <_>
+ 2 18 20 3 -1.
+ <_>
+ 2 19 20 1 3.
+ <_>
+
+ <_>
+ 1 9 22 3 -1.
+ <_>
+ 1 10 22 1 3.
+ <_>
+
+ <_>
+ 0 2 4 18 -1.
+ <_>
+ 0 2 2 9 2.
+ <_>
+ 2 11 2 9 2.
+ <_>
+
+ <_>
+ 19 0 4 23 -1.
+ <_>
+ 19 0 2 23 2.
+ <_>
+
+ <_>
+ 0 3 6 19 -1.
+ <_>
+ 3 3 3 19 2.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 20 2 2 9 3.
+ <_>
+
+ <_>
+ 0 5 10 6 -1.
+ <_>
+ 0 7 10 2 3.
+ <_>
+
+ <_>
+ 7 0 12 12 -1.
+ <_>
+ 13 0 6 6 2.
+ <_>
+ 7 6 6 6 2.
+ <_>
+
+ <_>
+ 0 3 24 6 -1.
+ <_>
+ 0 3 12 3 2.
+ <_>
+ 12 6 12 3 2.
+ <_>
+
+ <_>
+ 10 14 4 10 -1.
+ <_>
+ 10 19 4 5 2.
+ <_>
+
+ <_>
+ 8 9 4 15 -1.
+ <_>
+ 8 14 4 5 3.
+ <_>
+
+ <_>
+ 4 11 17 6 -1.
+ <_>
+ 4 14 17 3 2.
+ <_>
+
+ <_>
+ 2 5 18 8 -1.
+ <_>
+ 2 5 9 4 2.
+ <_>
+ 11 9 9 4 2.
+ <_>
+
+ <_>
+ 7 6 14 6 -1.
+ <_>
+ 14 6 7 3 2.
+ <_>
+ 7 9 7 3 2.
+ <_>
+
+ <_>
+ 3 6 14 6 -1.
+ <_>
+ 3 6 7 3 2.
+ <_>
+ 10 9 7 3 2.
+ <_>
+
+ <_>
+ 16 5 3 18 -1.
+ <_>
+ 17 5 1 18 3.
+ <_>
+
+ <_>
+ 5 5 3 18 -1.
+ <_>
+ 6 5 1 18 3.
+ <_>
+
+ <_>
+ 10 10 14 4 -1.
+ <_>
+ 10 12 14 2 2.
+ <_>
+
+ <_>
+ 4 10 9 4 -1.
+ <_>
+ 4 12 9 2 2.
+ <_>
+
+ <_>
+ 2 0 18 9 -1.
+ <_>
+ 2 3 18 3 3.
+ <_>
+
+ <_>
+ 6 3 12 8 -1.
+ <_>
+ 10 3 4 8 3.
+ <_>
+
+ <_>
+ 1 1 8 5 -1.
+ <_>
+ 5 1 4 5 2.
+ <_>
+
+ <_>
+ 12 7 7 8 -1.
+ <_>
+ 12 11 7 4 2.
+ <_>
+
+ <_>
+ 0 12 22 4 -1.
+ <_>
+ 0 14 22 2 2.
+ <_>
+
+ <_>
+ 15 6 4 15 -1.
+ <_>
+ 15 11 4 5 3.
+ <_>
+
+ <_>
+ 5 7 7 8 -1.
+ <_>
+ 5 11 7 4 2.
+ <_>
+
+ <_>
+ 8 18 9 4 -1.
+ <_>
+ 8 20 9 2 2.
+ <_>
+
+ <_>
+ 1 2 22 4 -1.
+ <_>
+ 1 4 22 2 2.
+ <_>
+
+ <_>
+ 17 3 6 17 -1.
+ <_>
+ 19 3 2 17 3.
+ <_>
+
+ <_>
+ 8 2 8 18 -1.
+ <_>
+ 8 11 8 9 2.
+ <_>
+
+ <_>
+ 17 0 6 12 -1.
+ <_>
+ 20 0 3 6 2.
+ <_>
+ 17 6 3 6 2.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 15 5 9 12 -1.
+ <_>
+ 15 11 9 6 2.
+ <_>
+
+ <_>
+ 2 22 18 2 -1.
+ <_>
+ 2 23 18 1 2.
+ <_>
+
+ <_>
+ 10 10 12 6 -1.
+ <_>
+ 16 10 6 3 2.
+ <_>
+ 10 13 6 3 2.
+ <_>
+
+ <_>
+ 0 1 4 11 -1.
+ <_>
+ 2 1 2 11 2.
+ <_>
+
+ <_>
+ 20 0 4 10 -1.
+ <_>
+ 20 0 2 10 2.
+ <_>
+
+ <_>
+ 1 3 6 17 -1.
+ <_>
+ 3 3 2 17 3.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 0 13 8 9 -1.
+ <_>
+ 0 16 8 3 3.
+ <_>
+
+ <_>
+ 16 8 6 12 -1.
+ <_>
+ 16 12 6 4 3.
+ <_>
+
+ <_>
+ 2 8 6 12 -1.
+ <_>
+ 2 12 6 4 3.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 1 5 19 3 -1.
+ <_>
+ 1 6 19 1 3.
+ <_>
+
+ <_>
+ 11 8 9 7 -1.
+ <_>
+ 14 8 3 7 3.
+ <_>
+
+ <_>
+ 3 8 12 9 -1.
+ <_>
+ 3 11 12 3 3.
+ <_>
+
+ <_>
+ 3 6 18 3 -1.
+ <_>
+ 3 7 18 1 3.
+ <_>
+
+ <_>
+ 10 0 4 12 -1.
+ <_>
+ 10 6 4 6 2.
+ <_>
+
+ <_>
+ 3 9 18 14 -1.
+ <_>
+ 3 9 9 14 2.
+ <_>
+
+ <_>
+ 0 0 4 9 -1.
+ <_>
+ 2 0 2 9 2.
+ <_>
+
+ <_>
+ 12 5 4 18 -1.
+ <_>
+ 12 5 2 18 2.
+ <_>
+
+ <_>
+ 8 5 4 18 -1.
+ <_>
+ 10 5 2 18 2.
+ <_>
+
+ <_>
+ 10 5 6 10 -1.
+ <_>
+ 12 5 2 10 3.
+ <_>
+
+ <_>
+ 9 4 4 11 -1.
+ <_>
+ 11 4 2 11 2.
+ <_>
+
+ <_>
+ 4 16 18 3 -1.
+ <_>
+ 4 17 18 1 3.
+ <_>
+
+ <_>
+ 0 16 20 3 -1.
+ <_>
+ 0 17 20 1 3.
+ <_>
+
+ <_>
+ 9 9 6 12 -1.
+ <_>
+ 9 13 6 4 3.
+ <_>
+
+ <_>
+ 8 13 8 8 -1.
+ <_>
+ 8 17 8 4 2.
+ <_>
+
+ <_>
+ 13 10 3 12 -1.
+ <_>
+ 13 16 3 6 2.
+ <_>
+
+ <_>
+ 5 9 14 14 -1.
+ <_>
+ 5 9 7 7 2.
+ <_>
+ 12 16 7 7 2.
+ <_>
+
+ <_>
+ 0 0 24 10 -1.
+ <_>
+ 12 0 12 5 2.
+ <_>
+ 0 5 12 5 2.
+ <_>
+
+ <_>
+ 1 11 18 2 -1.
+ <_>
+ 1 12 18 1 2.
+ <_>
+
+ <_>
+ 19 5 5 12 -1.
+ <_>
+ 19 9 5 4 3.
+ <_>
+
+ <_>
+ 0 5 5 12 -1.
+ <_>
+ 0 9 5 4 3.
+ <_>
+
+ <_>
+ 16 6 8 18 -1.
+ <_>
+ 20 6 4 9 2.
+ <_>
+ 16 15 4 9 2.
+ <_>
+
+ <_>
+ 0 6 8 18 -1.
+ <_>
+ 0 6 4 9 2.
+ <_>
+ 4 15 4 9 2.
+ <_>
+
+ <_>
+ 12 5 12 12 -1.
+ <_>
+ 18 5 6 6 2.
+ <_>
+ 12 11 6 6 2.
+ <_>
+
+ <_>
+ 7 6 6 9 -1.
+ <_>
+ 9 6 2 9 3.
+ <_>
+
+ <_>
+ 9 13 6 11 -1.
+ <_>
+ 11 13 2 11 3.
+ <_>
+
+ <_>
+ 0 5 12 12 -1.
+ <_>
+ 0 5 6 6 2.
+ <_>
+ 6 11 6 6 2.
+ <_>
+
+ <_>
+ 1 2 23 3 -1.
+ <_>
+ 1 3 23 1 3.
+ <_>
+
+ <_>
+ 1 15 19 3 -1.
+ <_>
+ 1 16 19 1 3.
+ <_>
+
+ <_>
+ 13 17 11 4 -1.
+ <_>
+ 13 19 11 2 2.
+ <_>
+
+ <_>
+ 0 13 8 5 -1.
+ <_>
+ 4 13 4 5 2.
+ <_>
+
+ <_>
+ 12 10 10 4 -1.
+ <_>
+ 12 10 5 4 2.
+ <_>
+
+ <_>
+ 4 6 9 9 -1.
+ <_>
+ 4 9 9 3 3.
+ <_>
+
+ <_>
+ 15 14 9 6 -1.
+ <_>
+ 15 16 9 2 3.
+ <_>
+
+ <_>
+ 1 12 9 6 -1.
+ <_>
+ 1 14 9 2 3.
+ <_>
+
+ <_>
+ 3 10 20 8 -1.
+ <_>
+ 13 10 10 4 2.
+ <_>
+ 3 14 10 4 2.
+ <_>
+
+ <_>
+ 2 0 9 18 -1.
+ <_>
+ 5 0 3 18 3.
+ <_>
+
+ <_>
+ 13 11 9 10 -1.
+ <_>
+ 16 11 3 10 3.
+ <_>
+
+ <_>
+ 1 2 8 5 -1.
+ <_>
+ 5 2 4 5 2.
+ <_>
+
+ <_>
+ 3 4 21 6 -1.
+ <_>
+ 10 4 7 6 3.
+ <_>
+
+ <_>
+ 7 0 10 14 -1.
+ <_>
+ 7 0 5 7 2.
+ <_>
+ 12 7 5 7 2.
+ <_>
+
+ <_>
+ 12 17 12 4 -1.
+ <_>
+ 12 19 12 2 2.
+ <_>
+
+ <_>
+ 0 6 23 4 -1.
+ <_>
+ 0 8 23 2 2.
+ <_>
+
+ <_>
+ 13 10 8 10 -1.
+ <_>
+ 17 10 4 5 2.
+ <_>
+ 13 15 4 5 2.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 15 16 9 4 -1.
+ <_>
+ 15 18 9 2 2.
+ <_>
+
+ <_>
+ 0 16 9 4 -1.
+ <_>
+ 0 18 9 2 2.
+ <_>
+
+ <_>
+ 13 11 6 6 -1.
+ <_>
+ 13 11 3 6 2.
+ <_>
+
+ <_>
+ 5 11 6 6 -1.
+ <_>
+ 8 11 3 6 2.
+ <_>
+
+ <_>
+ 0 3 24 6 -1.
+ <_>
+ 12 3 12 3 2.
+ <_>
+ 0 6 12 3 2.
+ <_>
+
+ <_>
+ 2 4 18 3 -1.
+ <_>
+ 2 5 18 1 3.
+ <_>
+
+ <_>
+ 0 0 24 4 -1.
+ <_>
+ 12 0 12 2 2.
+ <_>
+ 0 2 12 2 2.
+ <_>
+
+ <_>
+ 1 16 18 3 -1.
+ <_>
+ 1 17 18 1 3.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 0 15 9 6 -1.
+ <_>
+ 0 17 9 2 3.
+ <_>
+
+ <_>
+ 6 17 18 3 -1.
+ <_>
+ 6 18 18 1 3.
+ <_>
+
+ <_>
+ 8 8 6 10 -1.
+ <_>
+ 10 8 2 10 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 8 8 5 8 -1.
+ <_>
+ 8 12 5 4 2.
+ <_>
+
+ <_>
+ 12 8 6 8 -1.
+ <_>
+ 12 12 6 4 2.
+ <_>
+
+ <_>
+ 6 5 6 11 -1.
+ <_>
+ 8 5 2 11 3.
+ <_>
+
+ <_>
+ 13 6 8 9 -1.
+ <_>
+ 13 9 8 3 3.
+ <_>
+
+ <_>
+ 1 7 21 6 -1.
+ <_>
+ 1 9 21 2 3.
+ <_>
+
+ <_>
+ 15 5 3 12 -1.
+ <_>
+ 15 11 3 6 2.
+ <_>
+
+ <_>
+ 6 9 11 12 -1.
+ <_>
+ 6 13 11 4 3.
+ <_>
+
+ <_>
+ 13 8 10 8 -1.
+ <_>
+ 18 8 5 4 2.
+ <_>
+ 13 12 5 4 2.
+ <_>
+
+ <_>
+ 5 8 12 3 -1.
+ <_>
+ 11 8 6 3 2.
+ <_>
+
+ <_>
+ 6 11 18 4 -1.
+ <_>
+ 12 11 6 4 3.
+ <_>
+
+ <_>
+ 0 0 22 22 -1.
+ <_>
+ 0 11 22 11 2.
+ <_>
+
+ <_>
+ 11 2 6 8 -1.
+ <_>
+ 11 6 6 4 2.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 10 0 6 9 -1.
+ <_>
+ 12 0 2 9 3.
+ <_>
+
+ <_>
+ 8 3 6 14 -1.
+ <_>
+ 8 3 3 7 2.
+ <_>
+ 11 10 3 7 2.
+ <_>
+
+ <_>
+ 3 10 18 8 -1.
+ <_>
+ 9 10 6 8 3.
+ <_>
+
+ <_>
+ 10 0 3 14 -1.
+ <_>
+ 10 7 3 7 2.
+ <_>
+
+ <_>
+ 4 3 16 20 -1.
+ <_>
+ 4 13 16 10 2.
+ <_>
+
+ <_>
+ 9 4 6 10 -1.
+ <_>
+ 11 4 2 10 3.
+ <_>
+
+ <_>
+ 5 0 16 4 -1.
+ <_>
+ 5 2 16 2 2.
+ <_>
+
+ <_>
+ 2 5 18 4 -1.
+ <_>
+ 8 5 6 4 3.
+ <_>
+
+ <_>
+ 13 0 6 9 -1.
+ <_>
+ 15 0 2 9 3.
+ <_>
+
+ <_>
+ 8 4 8 5 -1.
+ <_>
+ 12 4 4 5 2.
+ <_>
+
+ <_>
+ 12 10 10 4 -1.
+ <_>
+ 12 10 5 4 2.
+ <_>
+
+ <_>
+ 2 10 10 4 -1.
+ <_>
+ 7 10 5 4 2.
+ <_>
+
+ <_>
+ 7 11 12 5 -1.
+ <_>
+ 11 11 4 5 3.
+ <_>
+
+ <_>
+ 3 10 8 10 -1.
+ <_>
+ 3 10 4 5 2.
+ <_>
+ 7 15 4 5 2.
+ <_>
+
+ <_>
+ 11 12 9 8 -1.
+ <_>
+ 14 12 3 8 3.
+ <_>
+
+ <_>
+ 0 21 24 3 -1.
+ <_>
+ 8 21 8 3 3.
+ <_>
+
+ <_>
+ 3 20 18 4 -1.
+ <_>
+ 9 20 6 4 3.
+ <_>
+
+ <_>
+ 1 15 9 6 -1.
+ <_>
+ 1 17 9 2 3.
+ <_>
+
+ <_>
+ 11 17 10 4 -1.
+ <_>
+ 11 19 10 2 2.
+ <_>
+
+ <_>
+ 9 12 4 12 -1.
+ <_>
+ 9 18 4 6 2.
+ <_>
+
+ <_>
+ 9 6 9 6 -1.
+ <_>
+ 12 6 3 6 3.
+ <_>
+
+ <_>
+ 1 13 6 9 -1.
+ <_>
+ 1 16 6 3 3.
+ <_>
+
+ <_>
+ 6 16 12 4 -1.
+ <_>
+ 6 18 12 2 2.
+ <_>
+
+ <_>
+ 1 5 20 3 -1.
+ <_>
+ 1 6 20 1 3.
+ <_>
+
+ <_>
+ 8 1 9 9 -1.
+ <_>
+ 8 4 9 3 3.
+ <_>
+
+ <_>
+ 2 19 9 4 -1.
+ <_>
+ 2 21 9 2 2.
+ <_>
+
+ <_>
+ 11 1 4 18 -1.
+ <_>
+ 11 7 4 6 3.
+ <_>
+
+ <_>
+ 7 2 8 12 -1.
+ <_>
+ 7 2 4 6 2.
+ <_>
+ 11 8 4 6 2.
+ <_>
+
+ <_>
+ 11 10 9 8 -1.
+ <_>
+ 14 10 3 8 3.
+ <_>
+
+ <_>
+ 5 11 12 5 -1.
+ <_>
+ 9 11 4 5 3.
+ <_>
+
+ <_>
+ 11 9 9 6 -1.
+ <_>
+ 14 9 3 6 3.
+ <_>
+
+ <_>
+ 5 10 6 9 -1.
+ <_>
+ 7 10 2 9 3.
+ <_>
+
+ <_>
+ 4 7 5 12 -1.
+ <_>
+ 4 11 5 4 3.
+ <_>
+
+ <_>
+ 2 0 21 6 -1.
+ <_>
+ 9 0 7 6 3.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 9 0 6 15 -1.
+ <_>
+ 11 0 2 15 3.
+ <_>
+
+ <_>
+ 2 2 18 2 -1.
+ <_>
+ 2 3 18 1 2.
+ <_>
+
+ <_>
+ 8 17 8 6 -1.
+ <_>
+ 8 20 8 3 2.
+ <_>
+
+ <_>
+ 3 0 18 2 -1.
+ <_>
+ 3 1 18 1 2.
+ <_>
+
+ <_>
+ 8 0 9 6 -1.
+ <_>
+ 11 0 3 6 3.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 6 7 12 5 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 2 3 2 9 3.
+ <_>
+
+ <_>
+ 20 2 4 9 -1.
+ <_>
+ 20 2 2 9 2.
+ <_>
+
+ <_>
+ 0 2 4 9 -1.
+ <_>
+ 2 2 2 9 2.
+ <_>
+
+ <_>
+ 0 1 24 4 -1.
+ <_>
+ 12 1 12 2 2.
+ <_>
+ 0 3 12 2 2.
+ <_>
+
+ <_>
+ 0 16 9 6 -1.
+ <_>
+ 0 18 9 2 3.
+ <_>
+
+ <_>
+ 14 13 9 6 -1.
+ <_>
+ 14 15 9 2 3.
+ <_>
+
+ <_>
+ 0 15 19 3 -1.
+ <_>
+ 0 16 19 1 3.
+ <_>
+
+ <_>
+ 1 5 22 12 -1.
+ <_>
+ 12 5 11 6 2.
+ <_>
+ 1 11 11 6 2.
+ <_>
+
+ <_>
+ 5 13 6 6 -1.
+ <_>
+ 8 13 3 6 2.
+ <_>
+
+ <_>
+ 4 2 20 3 -1.
+ <_>
+ 4 3 20 1 3.
+ <_>
+
+ <_>
+ 8 14 6 10 -1.
+ <_>
+ 10 14 2 10 3.
+ <_>
+
+ <_>
+ 6 12 16 6 -1.
+ <_>
+ 14 12 8 3 2.
+ <_>
+ 6 15 8 3 2.
+ <_>
+
+ <_>
+ 2 13 8 9 -1.
+ <_>
+ 2 16 8 3 3.
+ <_>
+
+ <_>
+ 11 8 6 14 -1.
+ <_>
+ 14 8 3 7 2.
+ <_>
+ 11 15 3 7 2.
+ <_>
+
+ <_>
+ 2 12 16 6 -1.
+ <_>
+ 2 12 8 3 2.
+ <_>
+ 10 15 8 3 2.
+ <_>
+
+ <_>
+ 5 16 16 8 -1.
+ <_>
+ 5 20 16 4 2.
+ <_>
+
+ <_>
+ 9 1 4 12 -1.
+ <_>
+ 9 7 4 6 2.
+ <_>
+
+ <_>
+ 8 2 8 10 -1.
+ <_>
+ 12 2 4 5 2.
+ <_>
+ 8 7 4 5 2.
+ <_>
+
+ <_>
+ 6 6 12 6 -1.
+ <_>
+ 6 6 6 3 2.
+ <_>
+ 12 9 6 3 2.
+ <_>
+
+ <_>
+ 10 7 6 9 -1.
+ <_>
+ 12 7 2 9 3.
+ <_>
+
+ <_>
+ 0 0 8 12 -1.
+ <_>
+ 0 0 4 6 2.
+ <_>
+ 4 6 4 6 2.
+ <_>
+
+ <_>
+ 18 8 6 9 -1.
+ <_>
+ 18 11 6 3 3.
+ <_>
+
+ <_>
+ 2 12 6 6 -1.
+ <_>
+ 5 12 3 6 2.
+ <_>
+
+ <_>
+ 3 21 21 3 -1.
+ <_>
+ 10 21 7 3 3.
+ <_>
+
+ <_>
+ 2 0 16 6 -1.
+ <_>
+ 2 3 16 3 2.
+ <_>
+
+ <_>
+ 13 6 7 6 -1.
+ <_>
+ 13 9 7 3 2.
+ <_>
+
+ <_>
+ 6 4 4 14 -1.
+ <_>
+ 6 11 4 7 2.
+ <_>
+
+ <_>
+ 9 7 6 9 -1.
+ <_>
+ 11 7 2 9 3.
+ <_>
+
+ <_>
+ 7 8 6 14 -1.
+ <_>
+ 7 8 3 7 2.
+ <_>
+ 10 15 3 7 2.
+ <_>
+
+ <_>
+ 18 8 4 16 -1.
+ <_>
+ 18 16 4 8 2.
+ <_>
+
+ <_>
+ 9 14 6 10 -1.
+ <_>
+ 11 14 2 10 3.
+ <_>
+
+ <_>
+ 6 11 12 5 -1.
+ <_>
+ 10 11 4 5 3.
+ <_>
+
+ <_>
+ 0 12 23 3 -1.
+ <_>
+ 0 13 23 1 3.
+ <_>
+
+ <_>
+ 13 0 6 12 -1.
+ <_>
+ 15 0 2 12 3.
+ <_>
+
+ <_>
+ 0 10 12 5 -1.
+ <_>
+ 4 10 4 5 3.
+ <_>
+
+ <_>
+ 13 2 10 4 -1.
+ <_>
+ 13 4 10 2 2.
+ <_>
+
+ <_>
+ 5 0 6 12 -1.
+ <_>
+ 7 0 2 12 3.
+ <_>
+
+ <_>
+ 11 6 9 6 -1.
+ <_>
+ 14 6 3 6 3.
+ <_>
+
+ <_>
+ 4 6 9 6 -1.
+ <_>
+ 7 6 3 6 3.
+ <_>
+
+ <_>
+ 6 11 18 13 -1.
+ <_>
+ 12 11 6 13 3.
+ <_>
+
+ <_>
+ 0 11 18 13 -1.
+ <_>
+ 6 11 6 13 3.
+ <_>
+
+ <_>
+ 12 16 12 6 -1.
+ <_>
+ 16 16 4 6 3.
+ <_>
+
+ <_>
+ 0 6 21 3 -1.
+ <_>
+ 0 7 21 1 3.
+ <_>
+
+ <_>
+ 12 16 12 6 -1.
+ <_>
+ 16 16 4 6 3.
+ <_>
+
+ <_>
+ 5 7 6 14 -1.
+ <_>
+ 5 14 6 7 2.
+ <_>
+
+ <_>
+ 5 10 19 2 -1.
+ <_>
+ 5 11 19 1 2.
+ <_>
+
+ <_>
+ 5 4 14 4 -1.
+ <_>
+ 5 6 14 2 2.
+ <_>
+
+ <_>
+ 3 18 18 4 -1.
+ <_>
+ 9 18 6 4 3.
+ <_>
+
+ <_>
+ 7 0 4 9 -1.
+ <_>
+ 9 0 2 9 2.
+ <_>
+
+ <_>
+ 13 3 11 4 -1.
+ <_>
+ 13 5 11 2 2.
+ <_>
+
+ <_>
+ 2 0 9 6 -1.
+ <_>
+ 5 0 3 6 3.
+ <_>
+
+ <_>
+ 19 1 4 23 -1.
+ <_>
+ 19 1 2 23 2.
+ <_>
+
+ <_>
+ 1 1 4 23 -1.
+ <_>
+ 3 1 2 23 2.
+ <_>
+
+ <_>
+ 5 16 18 3 -1.
+ <_>
+ 5 17 18 1 3.
+ <_>
+
+ <_>
+ 0 3 11 4 -1.
+ <_>
+ 0 5 11 2 2.
+ <_>
+
+ <_>
+ 2 16 20 3 -1.
+ <_>
+ 2 17 20 1 3.
+ <_>
+
+ <_>
+ 5 3 13 4 -1.
+ <_>
+ 5 5 13 2 2.
+ <_>
+
+ <_>
+ 1 9 22 15 -1.
+ <_>
+ 1 9 11 15 2.
+ <_>
+
+ <_>
+ 3 4 14 3 -1.
+ <_>
+ 10 4 7 3 2.
+ <_>
+
+ <_>
+ 8 7 10 4 -1.
+ <_>
+ 8 7 5 4 2.
+ <_>
+
+ <_>
+ 6 7 10 4 -1.
+ <_>
+ 11 7 5 4 2.
+ <_>
+
+ <_>
+ 10 4 6 9 -1.
+ <_>
+ 12 4 2 9 3.
+ <_>
+
+ <_>
+ 1 12 9 6 -1.
+ <_>
+ 4 12 3 6 3.
+ <_>
+
+ <_>
+ 8 3 8 10 -1.
+ <_>
+ 12 3 4 5 2.
+ <_>
+ 8 8 4 5 2.
+ <_>
+
+ <_>
+ 3 6 16 6 -1.
+ <_>
+ 3 6 8 3 2.
+ <_>
+ 11 9 8 3 2.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 5 9 14 3 2.
+ <_>
+
+ <_>
+ 4 3 9 6 -1.
+ <_>
+ 4 5 9 2 3.
+ <_>
+
+ <_>
+ 6 3 18 2 -1.
+ <_>
+ 6 4 18 1 2.
+ <_>
+
+ <_>
+ 7 6 9 6 -1.
+ <_>
+ 10 6 3 6 3.
+ <_>
+
+ <_>
+ 0 1 24 3 -1.
+ <_>
+ 0 2 24 1 3.
+ <_>
+
+ <_>
+ 0 17 10 6 -1.
+ <_>
+ 0 19 10 2 3.
+ <_>
+
+ <_>
+ 3 18 18 3 -1.
+ <_>
+ 3 19 18 1 3.
+ <_>
+
+ <_>
+ 2 5 6 16 -1.
+ <_>
+ 2 5 3 8 2.
+ <_>
+ 5 13 3 8 2.
+ <_>
+
+ <_>
+ 7 6 11 6 -1.
+ <_>
+ 7 8 11 2 3.
+ <_>
+
+ <_>
+ 5 2 12 22 -1.
+ <_>
+ 5 13 12 11 2.
+ <_>
+
+ <_>
+ 10 7 4 10 -1.
+ <_>
+ 10 12 4 5 2.
+ <_>
+
+ <_>
+ 9 0 4 18 -1.
+ <_>
+ 9 6 4 6 3.
+ <_>
+
+ <_>
+ 18 8 6 9 -1.
+ <_>
+ 18 11 6 3 3.
+ <_>
+
+ <_>
+ 4 7 15 10 -1.
+ <_>
+ 9 7 5 10 3.
+ <_>
+
+ <_>
+ 10 5 6 9 -1.
+ <_>
+ 12 5 2 9 3.
+ <_>
+
+ <_>
+ 9 9 6 10 -1.
+ <_>
+ 11 9 2 10 3.
+ <_>
+
+ <_>
+ 11 14 6 10 -1.
+ <_>
+ 13 14 2 10 3.
+ <_>
+
+ <_>
+ 7 14 6 10 -1.
+ <_>
+ 9 14 2 10 3.
+ <_>
+
+ <_>
+ 4 8 16 9 -1.
+ <_>
+ 4 11 16 3 3.
+ <_>
+
+ <_>
+ 2 11 20 3 -1.
+ <_>
+ 2 12 20 1 3.
+ <_>
+
+ <_>
+ 13 0 4 13 -1.
+ <_>
+ 13 0 2 13 2.
+ <_>
+
+ <_>
+ 7 0 4 13 -1.
+ <_>
+ 9 0 2 13 2.
+ <_>
+
+ <_>
+ 3 1 18 7 -1.
+ <_>
+ 9 1 6 7 3.
+ <_>
+
+ <_>
+ 1 11 6 9 -1.
+ <_>
+ 1 14 6 3 3.
+ <_>
+
+ <_>
+ 8 18 9 6 -1.
+ <_>
+ 8 20 9 2 3.
+ <_>
+
+ <_>
+ 3 9 15 6 -1.
+ <_>
+ 3 11 15 2 3.
+ <_>
+
+ <_>
+ 5 10 19 2 -1.
+ <_>
+ 5 11 19 1 2.
+ <_>
+
+ <_>
+ 8 6 7 16 -1.
+ <_>
+ 8 14 7 8 2.
+ <_>
+
+ <_>
+ 9 14 9 6 -1.
+ <_>
+ 9 16 9 2 3.
+ <_>
+
+ <_>
+ 0 7 8 12 -1.
+ <_>
+ 0 11 8 4 3.
+ <_>
+
+ <_>
+ 6 4 18 3 -1.
+ <_>
+ 6 5 18 1 3.
+ <_>
+
+ <_>
+ 0 16 12 6 -1.
+ <_>
+ 4 16 4 6 3.
+ <_>
+
+ <_>
+ 13 13 9 4 -1.
+ <_>
+ 13 15 9 2 2.
+ <_>
+
+ <_>
+ 5 8 14 14 -1.
+ <_>
+ 5 8 7 7 2.
+ <_>
+ 12 15 7 7 2.
+ <_>
+
+ <_>
+ 1 16 22 6 -1.
+ <_>
+ 12 16 11 3 2.
+ <_>
+ 1 19 11 3 2.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 9 5 10 10 -1.
+ <_>
+ 14 5 5 5 2.
+ <_>
+ 9 10 5 5 2.
+ <_>
+
+ <_>
+ 5 5 10 10 -1.
+ <_>
+ 5 5 5 5 2.
+ <_>
+ 10 10 5 5 2.
+ <_>
+
+ <_>
+ 4 6 16 6 -1.
+ <_>
+ 12 6 8 3 2.
+ <_>
+ 4 9 8 3 2.
+ <_>
+
+ <_>
+ 0 7 6 9 -1.
+ <_>
+ 0 10 6 3 3.
+ <_>
+
+ <_>
+ 16 10 8 14 -1.
+ <_>
+ 20 10 4 7 2.
+ <_>
+ 16 17 4 7 2.
+ <_>
+
+ <_>
+ 9 12 6 12 -1.
+ <_>
+ 9 18 6 6 2.
+ <_>
+
+ <_>
+ 8 10 8 12 -1.
+ <_>
+ 12 10 4 6 2.
+ <_>
+ 8 16 4 6 2.
+ <_>
+
+ <_>
+ 8 0 4 9 -1.
+ <_>
+ 10 0 2 9 2.
+ <_>
+
+ <_>
+ 10 4 8 16 -1.
+ <_>
+ 14 4 4 8 2.
+ <_>
+ 10 12 4 8 2.
+ <_>
+
+ <_>
+ 7 10 10 6 -1.
+ <_>
+ 7 12 10 2 3.
+ <_>
+
+ <_>
+ 5 6 14 14 -1.
+ <_>
+ 12 6 7 7 2.
+ <_>
+ 5 13 7 7 2.
+ <_>
+
+ <_>
+ 2 11 20 2 -1.
+ <_>
+ 2 12 20 1 2.
+ <_>
+
+ <_>
+ 18 8 4 16 -1.
+ <_>
+ 18 16 4 8 2.
+ <_>
+
+ <_>
+ 1 11 12 10 -1.
+ <_>
+ 1 11 6 5 2.
+ <_>
+ 7 16 6 5 2.
+ <_>
+
+ <_>
+ 6 9 12 4 -1.
+ <_>
+ 6 11 12 2 2.
+ <_>
+
+ <_>
+ 9 12 6 7 -1.
+ <_>
+ 12 12 3 7 2.
+ <_>
+
+ <_>
+ 10 4 8 16 -1.
+ <_>
+ 14 4 4 8 2.
+ <_>
+ 10 12 4 8 2.
+ <_>
+
+ <_>
+ 6 4 8 16 -1.
+ <_>
+ 6 4 4 8 2.
+ <_>
+ 10 12 4 8 2.
+ <_>
+
+ <_>
+ 8 9 9 6 -1.
+ <_>
+ 11 9 3 6 3.
+ <_>
+
+ <_>
+ 1 5 16 12 -1.
+ <_>
+ 1 5 8 6 2.
+ <_>
+ 9 11 8 6 2.
+ <_>
+
+ <_>
+ 9 9 6 8 -1.
+ <_>
+ 9 9 3 8 2.
+ <_>
+
+ <_>
+ 6 0 3 18 -1.
+ <_>
+ 7 0 1 18 3.
+ <_>
+
+ <_>
+ 17 9 5 14 -1.
+ <_>
+ 17 16 5 7 2.
+ <_>
+
+ <_>
+ 2 9 5 14 -1.
+ <_>
+ 2 16 5 7 2.
+ <_>
+
+ <_>
+ 7 4 10 6 -1.
+ <_>
+ 7 7 10 3 2.
+ <_>
+
+ <_>
+ 1 3 23 18 -1.
+ <_>
+ 1 9 23 6 3.
+ <_>
+
+ <_>
+ 1 1 21 3 -1.
+ <_>
+ 8 1 7 3 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 3 18 12 6 -1.
+ <_>
+ 3 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 16 8 8 16 -1.
+ <_>
+ 20 8 4 8 2.
+ <_>
+ 16 16 4 8 2.
+ <_>
+
+ <_>
+ 0 19 24 4 -1.
+ <_>
+ 8 19 8 4 3.
+ <_>
+
+ <_>
+ 16 8 8 16 -1.
+ <_>
+ 20 8 4 8 2.
+ <_>
+ 16 16 4 8 2.
+ <_>
+
+ <_>
+ 0 8 8 16 -1.
+ <_>
+ 0 8 4 8 2.
+ <_>
+ 4 16 4 8 2.
+ <_>
+
+ <_>
+ 8 12 8 10 -1.
+ <_>
+ 8 17 8 5 2.
+ <_>
+
+ <_>
+ 5 7 5 8 -1.
+ <_>
+ 5 11 5 4 2.
+ <_>
+
+ <_>
+ 4 1 19 2 -1.
+ <_>
+ 4 2 19 1 2.
+ <_>
+
+ <_>
+ 0 12 24 9 -1.
+ <_>
+ 8 12 8 9 3.
+ <_>
+
+ <_>
+ 6 0 13 8 -1.
+ <_>
+ 6 4 13 4 2.
+ <_>
+
+ <_>
+ 0 0 24 3 -1.
+ <_>
+ 0 1 24 1 3.
+ <_>
+
+ <_>
+ 20 3 4 11 -1.
+ <_>
+ 20 3 2 11 2.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 6 11 12 8 -1.
+ <_>
+ 12 11 6 4 2.
+ <_>
+ 6 15 6 4 2.
+ <_>
+
+ <_>
+ 0 8 12 6 -1.
+ <_>
+ 0 8 6 3 2.
+ <_>
+ 6 11 6 3 2.
+ <_>
+
+ <_>
+ 6 17 18 3 -1.
+ <_>
+ 6 18 18 1 3.
+ <_>
+
+ <_>
+ 0 14 9 6 -1.
+ <_>
+ 0 16 9 2 3.
+ <_>
+
+ <_>
+ 20 3 4 9 -1.
+ <_>
+ 20 3 2 9 2.
+ <_>
+
+ <_>
+ 0 3 4 9 -1.
+ <_>
+ 2 3 2 9 2.
+ <_>
+
+ <_>
+ 15 0 9 19 -1.
+ <_>
+ 18 0 3 19 3.
+ <_>
+
+ <_>
+ 0 0 9 19 -1.
+ <_>
+ 3 0 3 19 3.
+ <_>
+
+ <_>
+ 13 11 6 8 -1.
+ <_>
+ 13 11 3 8 2.
+ <_>
+
+ <_>
+ 5 11 6 8 -1.
+ <_>
+ 8 11 3 8 2.
+ <_>
+
+ <_>
+ 5 11 19 3 -1.
+ <_>
+ 5 12 19 1 3.
+ <_>
+
+ <_>
+ 3 20 18 4 -1.
+ <_>
+ 9 20 6 4 3.
+ <_>
+
+ <_>
+ 6 6 16 6 -1.
+ <_>
+ 6 8 16 2 3.
+ <_>
+
+ <_>
+ 6 0 9 6 -1.
+ <_>
+ 9 0 3 6 3.
+ <_>
+
+ <_>
+ 10 3 4 14 -1.
+ <_>
+ 10 10 4 7 2.
+ <_>
+
+ <_>
+ 1 5 15 12 -1.
+ <_>
+ 1 11 15 6 2.
+ <_>
+
+ <_>
+ 11 12 8 5 -1.
+ <_>
+ 11 12 4 5 2.
+ <_>
+
+ <_>
+ 5 0 6 9 -1.
+ <_>
+ 7 0 2 9 3.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 5 5 12 8 -1.
+ <_>
+ 5 5 6 4 2.
+ <_>
+ 11 9 6 4 2.
+ <_>
+
+ <_>
+ 13 12 11 6 -1.
+ <_>
+ 13 14 11 2 3.
+ <_>
+
+ <_>
+ 0 13 21 3 -1.
+ <_>
+ 0 14 21 1 3.
+ <_>
+
+ <_>
+ 8 1 8 12 -1.
+ <_>
+ 12 1 4 6 2.
+ <_>
+ 8 7 4 6 2.
+ <_>
+
+ <_>
+ 1 0 6 12 -1.
+ <_>
+ 1 0 3 6 2.
+ <_>
+ 4 6 3 6 2.
+ <_>
+
+ <_>
+ 2 2 21 2 -1.
+ <_>
+ 2 3 21 1 2.
+ <_>
+
+ <_>
+ 2 2 19 3 -1.
+ <_>
+ 2 3 19 1 3.
+ <_>
+
+ <_>
+ 17 10 6 14 -1.
+ <_>
+ 20 10 3 7 2.
+ <_>
+ 17 17 3 7 2.
+ <_>
+
+ <_>
+ 1 10 6 14 -1.
+ <_>
+ 1 10 3 7 2.
+ <_>
+ 4 17 3 7 2.
+ <_>
+
+ <_>
+ 7 6 14 14 -1.
+ <_>
+ 14 6 7 7 2.
+ <_>
+ 7 13 7 7 2.
+ <_>
+
+ <_>
+ 0 12 9 6 -1.
+ <_>
+ 0 14 9 2 3.
+ <_>
+
+ <_>
+ 15 14 8 9 -1.
+ <_>
+ 15 17 8 3 3.
+ <_>
+
+ <_>
+ 1 1 22 4 -1.
+ <_>
+ 1 1 11 2 2.
+ <_>
+ 12 3 11 2 2.
+ <_>
+
+ <_>
+ 9 11 9 6 -1.
+ <_>
+ 9 13 9 2 3.
+ <_>
+
+ <_>
+ 0 15 18 3 -1.
+ <_>
+ 0 16 18 1 3.
+ <_>
+
+ <_>
+ 16 14 7 9 -1.
+ <_>
+ 16 17 7 3 3.
+ <_>
+
+ <_>
+ 4 3 16 4 -1.
+ <_>
+ 12 3 8 4 2.
+ <_>
+
+ <_>
+ 7 6 12 5 -1.
+ <_>
+ 7 6 6 5 2.
+ <_>
+
+ <_>
+ 9 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 12 1 4 10 -1.
+ <_>
+ 12 1 2 10 2.
+ <_>
+
+ <_>
+ 8 1 4 10 -1.
+ <_>
+ 10 1 2 10 2.
+ <_>
+
+ <_>
+ 15 15 6 9 -1.
+ <_>
+ 15 18 6 3 3.
+ <_>
+
+ <_>
+ 3 15 6 9 -1.
+ <_>
+ 3 18 6 3 3.
+ <_>
+
+ <_>
+ 15 1 3 19 -1.
+ <_>
+ 16 1 1 19 3.
+ <_>
+
+ <_>
+ 1 3 6 9 -1.
+ <_>
+ 3 3 2 9 3.
+ <_>
+
+ <_>
+ 15 0 3 19 -1.
+ <_>
+ 16 0 1 19 3.
+ <_>
+
+ <_>
+ 6 3 12 4 -1.
+ <_>
+ 12 3 6 4 2.
+ <_>
+
+ <_>
+ 10 5 4 9 -1.
+ <_>
+ 10 5 2 9 2.
+ <_>
+
+ <_>
+ 6 0 3 19 -1.
+ <_>
+ 7 0 1 19 3.
+ <_>
+
+ <_>
+ 11 1 3 12 -1.
+ <_>
+ 11 7 3 6 2.
+ <_>
+
+ <_>
+ 6 7 10 5 -1.
+ <_>
+ 11 7 5 5 2.
+ <_>
+
+ <_>
+ 11 3 3 18 -1.
+ <_>
+ 12 3 1 18 3.
+ <_>
+
+ <_>
+ 9 3 6 12 -1.
+ <_>
+ 11 3 2 12 3.
+ <_>
+
+ <_>
+ 3 7 19 3 -1.
+ <_>
+ 3 8 19 1 3.
+ <_>
+
+ <_>
+ 2 7 18 3 -1.
+ <_>
+ 2 8 18 1 3.
+ <_>
+
+ <_>
+ 3 13 18 4 -1.
+ <_>
+ 12 13 9 2 2.
+ <_>
+ 3 15 9 2 2.
+ <_>
+
+ <_>
+ 3 5 6 9 -1.
+ <_>
+ 5 5 2 9 3.
+ <_>
+
+ <_>
+ 4 1 20 4 -1.
+ <_>
+ 14 1 10 2 2.
+ <_>
+ 4 3 10 2 2.
+ <_>
+
+ <_>
+ 0 1 20 4 -1.
+ <_>
+ 0 1 10 2 2.
+ <_>
+ 10 3 10 2 2.
+ <_>
+
+ <_>
+ 10 15 6 6 -1.
+ <_>
+ 10 15 3 6 2.
+ <_>
+
+ <_>
+ 0 2 24 8 -1.
+ <_>
+ 8 2 8 8 3.
+ <_>
+
+ <_>
+ 5 5 18 3 -1.
+ <_>
+ 5 6 18 1 3.
+ <_>
+
+ <_>
+ 8 15 6 6 -1.
+ <_>
+ 11 15 3 6 2.
+ <_>
+
+ <_>
+ 11 12 8 5 -1.
+ <_>
+ 11 12 4 5 2.
+ <_>
+
+ <_>
+ 5 12 8 5 -1.
+ <_>
+ 9 12 4 5 2.
+ <_>
+
+ <_>
+ 5 0 14 6 -1.
+ <_>
+ 5 2 14 2 3.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 10 7 5 12 -1.
+ <_>
+ 10 11 5 4 3.
+ <_>
+
+ <_>
+ 7 9 8 14 -1.
+ <_>
+ 7 9 4 7 2.
+ <_>
+ 11 16 4 7 2.
+ <_>
+
+ <_>
+ 1 5 22 6 -1.
+ <_>
+ 12 5 11 3 2.
+ <_>
+ 1 8 11 3 2.
+ <_>
+
+ <_>
+ 0 5 6 6 -1.
+ <_>
+ 0 8 6 3 2.
+ <_>
+
+ <_>
+ 12 17 9 4 -1.
+ <_>
+ 12 19 9 2 2.
+ <_>
+
+ <_>
+ 2 18 19 3 -1.
+ <_>
+ 2 19 19 1 3.
+ <_>
+
+ <_>
+ 12 17 9 4 -1.
+ <_>
+ 12 19 9 2 2.
+ <_>
+
+ <_>
+ 1 17 18 3 -1.
+ <_>
+ 1 18 18 1 3.
+ <_>
+
+ <_>
+ 12 17 9 4 -1.
+ <_>
+ 12 19 9 2 2.
+ <_>
+
+ <_>
+ 0 0 24 3 -1.
+ <_>
+ 0 1 24 1 3.
+ <_>
+
+ <_>
+ 5 0 14 4 -1.
+ <_>
+ 5 2 14 2 2.
+ <_>
+
+ <_>
+ 6 14 9 6 -1.
+ <_>
+ 6 16 9 2 3.
+ <_>
+
+ <_>
+ 14 13 6 9 -1.
+ <_>
+ 14 16 6 3 3.
+ <_>
+
+ <_>
+ 5 20 13 4 -1.
+ <_>
+ 5 22 13 2 2.
+ <_>
+
+ <_>
+ 9 9 6 12 -1.
+ <_>
+ 9 13 6 4 3.
+ <_>
+
+ <_>
+ 1 10 21 3 -1.
+ <_>
+ 8 10 7 3 3.
+ <_>
+
+ <_>
+ 8 8 9 6 -1.
+ <_>
+ 11 8 3 6 3.
+ <_>
+
+ <_>
+ 3 10 9 7 -1.
+ <_>
+ 6 10 3 7 3.
+ <_>
+
+ <_>
+ 12 10 10 8 -1.
+ <_>
+ 17 10 5 4 2.
+ <_>
+ 12 14 5 4 2.
+ <_>
+
+ <_>
+ 0 15 24 3 -1.
+ <_>
+ 8 15 8 3 3.
+ <_>
+
+ <_>
+ 8 5 9 6 -1.
+ <_>
+ 8 7 9 2 3.
+ <_>
+
+ <_>
+ 4 13 6 9 -1.
+ <_>
+ 4 16 6 3 3.
+ <_>
+
+ <_>
+ 12 17 9 4 -1.
+ <_>
+ 12 19 9 2 2.
+ <_>
+
+ <_>
+ 9 12 6 6 -1.
+ <_>
+ 9 15 6 3 2.
+ <_>
+
+ <_>
+ 9 9 14 10 -1.
+ <_>
+ 16 9 7 5 2.
+ <_>
+ 9 14 7 5 2.
+ <_>
+
+ <_>
+ 1 9 14 10 -1.
+ <_>
+ 1 9 7 5 2.
+ <_>
+ 8 14 7 5 2.
+ <_>
+
+ <_>
+ 8 7 9 17 -1.
+ <_>
+ 11 7 3 17 3.
+ <_>
+
+ <_>
+ 3 4 6 20 -1.
+ <_>
+ 3 4 3 10 2.
+ <_>
+ 6 14 3 10 2.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 8 5 4 2.
+ <_>
+
+ <_>
+ 10 7 4 9 -1.
+ <_>
+ 12 7 2 9 2.
+ <_>
+
+ <_>
+ 10 15 6 9 -1.
+ <_>
+ 12 15 2 9 3.
+ <_>
+
+ <_>
+ 3 8 6 16 -1.
+ <_>
+ 3 8 3 8 2.
+ <_>
+ 6 16 3 8 2.
+ <_>
+
+ <_>
+ 12 17 9 4 -1.
+ <_>
+ 12 19 9 2 2.
+ <_>
+
+ <_>
+ 3 17 9 4 -1.
+ <_>
+ 3 19 9 2 2.
+ <_>
+
+ <_>
+ 10 1 9 6 -1.
+ <_>
+ 13 1 3 6 3.
+ <_>
+
+ <_>
+ 5 7 4 10 -1.
+ <_>
+ 5 12 4 5 2.
+ <_>
+
+ <_>
+ 7 5 12 6 -1.
+ <_>
+ 11 5 4 6 3.
+ <_>
+
+ <_>
+ 6 4 9 8 -1.
+ <_>
+ 9 4 3 8 3.
+ <_>
+
+ <_>
+ 12 16 10 8 -1.
+ <_>
+ 17 16 5 4 2.
+ <_>
+ 12 20 5 4 2.
+ <_>
+
+ <_>
+ 2 16 10 8 -1.
+ <_>
+ 2 16 5 4 2.
+ <_>
+ 7 20 5 4 2.
+ <_>
+
+ <_>
+ 0 0 24 4 -1.
+ <_>
+ 12 0 12 2 2.
+ <_>
+ 0 2 12 2 2.
+ <_>
+
+ <_>
+ 0 6 9 6 -1.
+ <_>
+ 0 8 9 2 3.
+ <_>
+
+ <_>
+ 0 4 24 6 -1.
+ <_>
+ 12 4 12 3 2.
+ <_>
+ 0 7 12 3 2.
+ <_>
+
+ <_>
+ 5 0 11 4 -1.
+ <_>
+ 5 2 11 2 2.
+ <_>
+
+ <_>
+ 1 1 22 4 -1.
+ <_>
+ 12 1 11 2 2.
+ <_>
+ 1 3 11 2 2.
+ <_>
+
+ <_>
+ 9 6 6 18 -1.
+ <_>
+ 9 15 6 9 2.
+ <_>
+
+ <_>
+ 2 9 20 4 -1.
+ <_>
+ 2 11 20 2 2.
+ <_>
+
+ <_>
+ 5 2 14 14 -1.
+ <_>
+ 5 9 14 7 2.
+ <_>
+
+ <_>
+ 4 2 16 6 -1.
+ <_>
+ 4 5 16 3 2.
+ <_>
+
+ <_>
+ 2 3 19 3 -1.
+ <_>
+ 2 4 19 1 3.
+ <_>
+
+ <_>
+ 7 1 10 4 -1.
+ <_>
+ 7 3 10 2 2.
+ <_>
+
+ <_>
+ 0 9 4 15 -1.
+ <_>
+ 0 14 4 5 3.
+ <_>
+
+ <_>
+ 2 10 21 3 -1.
+ <_>
+ 2 11 21 1 3.
+ <_>
+
+ <_>
+ 3 0 6 6 -1.
+ <_>
+ 6 0 3 6 2.
+ <_>
+
+ <_>
+ 6 4 14 9 -1.
+ <_>
+ 6 7 14 3 3.
+ <_>
+
+ <_>
+ 9 1 6 9 -1.
+ <_>
+ 11 1 2 9 3.
+ <_>
+
+ <_>
+ 15 8 9 9 -1.
+ <_>
+ 15 11 9 3 3.
+ <_>
+
+ <_>
+ 8 0 4 21 -1.
+ <_>
+ 8 7 4 7 3.
+ <_>
+
+ <_>
+ 3 22 19 2 -1.
+ <_>
+ 3 23 19 1 2.
+ <_>
+
+ <_>
+ 2 15 20 3 -1.
+ <_>
+ 2 16 20 1 3.
+ <_>
+
+ <_>
+ 19 0 4 13 -1.
+ <_>
+ 19 0 2 13 2.
+ <_>
+
+ <_>
+ 1 7 8 8 -1.
+ <_>
+ 1 11 8 4 2.
+ <_>
+
+ <_>
+ 14 14 6 9 -1.
+ <_>
+ 14 17 6 3 3.
+ <_>
+
+ <_>
+ 4 14 6 9 -1.
+ <_>
+ 4 17 6 3 3.
+ <_>
+
+ <_>
+ 14 5 4 10 -1.
+ <_>
+ 14 5 2 10 2.
+ <_>
+
+ <_>
+ 6 5 4 10 -1.
+ <_>
+ 8 5 2 10 2.
+ <_>
+
+ <_>
+ 14 5 6 6 -1.
+ <_>
+ 14 8 6 3 2.
+ <_>
+
+ <_>
+ 4 5 6 6 -1.
+ <_>
+ 4 8 6 3 2.
+ <_>
+
+ <_>
+ 0 2 24 21 -1.
+ <_>
+ 8 2 8 21 3.
+ <_>
+
+ <_>
+ 1 2 6 13 -1.
+ <_>
+ 3 2 2 13 3.
+ <_>
+
+ <_>
+ 20 0 4 21 -1.
+ <_>
+ 20 0 2 21 2.
+ <_>
+
+ <_>
+ 0 4 4 20 -1.
+ <_>
+ 2 4 2 20 2.
+ <_>
+
+ <_>
+ 8 16 9 6 -1.
+ <_>
+ 8 18 9 2 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 16 12 7 9 -1.
+ <_>
+ 16 15 7 3 3.
+ <_>
+
+ <_>
+ 5 21 14 3 -1.
+ <_>
+ 12 21 7 3 2.
+ <_>
+
+ <_>
+ 11 5 6 9 -1.
+ <_>
+ 11 5 3 9 2.
+ <_>
+
+ <_>
+ 10 5 4 10 -1.
+ <_>
+ 12 5 2 10 2.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 7 5 6 9 -1.
+ <_>
+ 10 5 3 9 2.
+ <_>
+
+ <_>
+ 14 14 10 4 -1.
+ <_>
+ 14 16 10 2 2.
+ <_>
+
+ <_>
+ 5 5 14 14 -1.
+ <_>
+ 5 5 7 7 2.
+ <_>
+ 12 12 7 7 2.
+ <_>
+
+ <_>
+ 12 8 12 6 -1.
+ <_>
+ 18 8 6 3 2.
+ <_>
+ 12 11 6 3 2.
+ <_>
+
+ <_>
+ 6 6 12 12 -1.
+ <_>
+ 6 6 6 6 2.
+ <_>
+ 12 12 6 6 2.
+ <_>
+
+ <_>
+ 11 13 6 10 -1.
+ <_>
+ 13 13 2 10 3.
+ <_>
+
+ <_>
+ 1 10 20 8 -1.
+ <_>
+ 1 10 10 4 2.
+ <_>
+ 11 14 10 4 2.
+ <_>
+
+ <_>
+ 15 13 9 6 -1.
+ <_>
+ 15 15 9 2 3.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 9 3 6 3 3.
+ <_>
+
+ <_>
+ 10 1 5 14 -1.
+ <_>
+ 10 8 5 7 2.
+ <_>
+
+ <_>
+ 3 4 16 6 -1.
+ <_>
+ 3 6 16 2 3.
+ <_>
+
+ <_>
+ 16 3 8 9 -1.
+ <_>
+ 16 6 8 3 3.
+ <_>
+
+ <_>
+ 7 13 6 10 -1.
+ <_>
+ 9 13 2 10 3.
+ <_>
+
+ <_>
+ 15 13 9 6 -1.
+ <_>
+ 15 15 9 2 3.
+ <_>
+
+ <_>
+ 0 13 9 6 -1.
+ <_>
+ 0 15 9 2 3.
+ <_>
+
+ <_>
+ 13 16 9 6 -1.
+ <_>
+ 13 18 9 2 3.
+ <_>
+
+ <_>
+ 2 16 9 6 -1.
+ <_>
+ 2 18 9 2 3.
+ <_>
+
+ <_>
+ 5 16 18 3 -1.
+ <_>
+ 5 17 18 1 3.
+ <_>
+
+ <_>
+ 1 16 18 3 -1.
+ <_>
+ 1 17 18 1 3.
+ <_>
+
+ <_>
+ 5 0 18 3 -1.
+ <_>
+ 5 1 18 1 3.
+ <_>
+
+ <_>
+ 1 1 19 2 -1.
+ <_>
+ 1 2 19 1 2.
+ <_>
+
+ <_>
+ 14 2 6 11 -1.
+ <_>
+ 16 2 2 11 3.
+ <_>
+
+ <_>
+ 4 15 15 6 -1.
+ <_>
+ 9 15 5 6 3.
+ <_>
+
+ <_>
+ 14 2 6 11 -1.
+ <_>
+ 16 2 2 11 3.
+ <_>
+
+ <_>
+ 4 2 6 11 -1.
+ <_>
+ 6 2 2 11 3.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 18 5 6 3 3.
+ <_>
+
+ <_>
+ 1 2 22 4 -1.
+ <_>
+ 1 2 11 2 2.
+ <_>
+ 12 4 11 2 2.
+ <_>
+
+ <_>
+ 2 0 21 12 -1.
+ <_>
+ 9 0 7 12 3.
+ <_>
+
+ <_>
+ 0 12 18 3 -1.
+ <_>
+ 0 13 18 1 3.
+ <_>
+
+ <_>
+ 12 2 6 9 -1.
+ <_>
+ 14 2 2 9 3.
+ <_>
+
+ <_>
+ 3 10 18 3 -1.
+ <_>
+ 3 11 18 1 3.
+ <_>
+
+ <_>
+ 16 3 8 9 -1.
+ <_>
+ 16 6 8 3 3.
+ <_>
+
+ <_>
+ 3 7 18 3 -1.
+ <_>
+ 3 8 18 1 3.
+ <_>
+
+ <_>
+ 9 11 6 9 -1.
+ <_>
+ 11 11 2 9 3.
+ <_>
+
+ <_>
+ 9 8 6 9 -1.
+ <_>
+ 11 8 2 9 3.
+ <_>
+
+ <_>
+ 15 0 2 18 -1.
+ <_>
+ 15 0 1 18 2.
+ <_>
+
+ <_>
+ 7 0 2 18 -1.
+ <_>
+ 8 0 1 18 2.
+ <_>
+
+ <_>
+ 17 3 7 9 -1.
+ <_>
+ 17 6 7 3 3.
+ <_>
+
+ <_>
+ 3 18 9 6 -1.
+ <_>
+ 3 20 9 2 3.
+ <_>
+
+ <_>
+ 3 18 21 3 -1.
+ <_>
+ 3 19 21 1 3.
+ <_>
+
+ <_>
+ 0 3 7 9 -1.
+ <_>
+ 0 6 7 3 3.
+ <_>
+
+ <_>
+ 2 7 22 3 -1.
+ <_>
+ 2 8 22 1 3.
+ <_>
+
+ <_>
+ 0 3 24 16 -1.
+ <_>
+ 0 3 12 8 2.
+ <_>
+ 12 11 12 8 2.
+ <_>
+
+ <_>
+ 13 17 9 4 -1.
+ <_>
+ 13 19 9 2 2.
+ <_>
+
+ <_>
+ 5 5 12 8 -1.
+ <_>
+ 5 5 6 4 2.
+ <_>
+ 11 9 6 4 2.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 12 6 7 3 2.
+ <_>
+ 5 9 7 3 2.
+ <_>
+
+ <_>
+ 5 16 14 6 -1.
+ <_>
+ 5 16 7 3 2.
+ <_>
+ 12 19 7 3 2.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 18 5 6 3 3.
+ <_>
+
+ <_>
+ 0 2 6 9 -1.
+ <_>
+ 0 5 6 3 3.
+ <_>
+
+ <_>
+ 3 4 20 10 -1.
+ <_>
+ 13 4 10 5 2.
+ <_>
+ 3 9 10 5 2.
+ <_>
+
+ <_>
+ 2 13 9 8 -1.
+ <_>
+ 5 13 3 8 3.
+ <_>
+
+ <_>
+ 2 1 21 15 -1.
+ <_>
+ 9 1 7 15 3.
+ <_>
+
+ <_>
+ 5 12 14 8 -1.
+ <_>
+ 12 12 7 8 2.
+ <_>
+
+ <_>
+ 6 7 12 4 -1.
+ <_>
+ 6 7 6 4 2.
+ <_>
+
+ <_>
+ 6 5 9 6 -1.
+ <_>
+ 9 5 3 6 3.
+ <_>
+
+ <_>
+ 13 11 6 6 -1.
+ <_>
+ 13 11 3 6 2.
+ <_>
+
+ <_>
+ 5 11 6 6 -1.
+ <_>
+ 8 11 3 6 2.
+ <_>
+
+ <_>
+ 6 4 18 2 -1.
+ <_>
+ 6 5 18 1 2.
+ <_>
+
+ <_>
+ 0 2 6 11 -1.
+ <_>
+ 2 2 2 11 3.
+ <_>
+
+ <_>
+ 18 0 6 15 -1.
+ <_>
+ 20 0 2 15 3.
+ <_>
+
+ <_>
+ 0 0 6 13 -1.
+ <_>
+ 2 0 2 13 3.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 8 2 8 4 3.
+ <_>
+
+ <_>
+ 3 13 18 4 -1.
+ <_>
+ 12 13 9 4 2.
+ <_>
+
+ <_>
+ 9 7 10 4 -1.
+ <_>
+ 9 7 5 4 2.
+ <_>
+
+ <_>
+ 5 8 12 3 -1.
+ <_>
+ 11 8 6 3 2.
+ <_>
+
+ <_>
+ 4 14 19 3 -1.
+ <_>
+ 4 15 19 1 3.
+ <_>
+
+ <_>
+ 10 0 4 20 -1.
+ <_>
+ 10 10 4 10 2.
+ <_>
+
+ <_>
+ 8 15 9 6 -1.
+ <_>
+ 8 17 9 2 3.
+ <_>
+
+ <_>
+ 2 9 15 4 -1.
+ <_>
+ 7 9 5 4 3.
+ <_>
+
+ <_>
+ 8 4 12 7 -1.
+ <_>
+ 12 4 4 7 3.
+ <_>
+
+ <_>
+ 0 10 6 9 -1.
+ <_>
+ 0 13 6 3 3.
+ <_>
+
+ <_>
+ 18 5 6 9 -1.
+ <_>
+ 18 8 6 3 3.
+ <_>
+
+ <_>
+ 0 18 16 6 -1.
+ <_>
+ 0 18 8 3 2.
+ <_>
+ 8 21 8 3 2.
+ <_>
+
+ <_>
+ 9 18 14 6 -1.
+ <_>
+ 16 18 7 3 2.
+ <_>
+ 9 21 7 3 2.
+ <_>
+
+ <_>
+ 1 20 20 4 -1.
+ <_>
+ 1 20 10 2 2.
+ <_>
+ 11 22 10 2 2.
+ <_>
+
+ <_>
+ 2 8 20 6 -1.
+ <_>
+ 12 8 10 3 2.
+ <_>
+ 2 11 10 3 2.
+ <_>
+
+ <_>
+ 7 8 6 9 -1.
+ <_>
+ 9 8 2 9 3.
+ <_>
+
+ <_>
+ 8 5 12 8 -1.
+ <_>
+ 12 5 4 8 3.
+ <_>
+
+ <_>
+ 4 5 12 8 -1.
+ <_>
+ 8 5 4 8 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 2 0 6 16 -1.
+ <_>
+ 4 0 2 16 3.
+ <_>
+
+ <_>
+ 15 4 6 12 -1.
+ <_>
+ 15 8 6 4 3.
+ <_>
+
+ <_>
+ 3 4 6 12 -1.
+ <_>
+ 3 8 6 4 3.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 4 0 15 22 -1.
+ <_>
+ 4 11 15 11 2.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 0 12 9 6 -1.
+ <_>
+ 0 14 9 2 3.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 0 15 9 6 -1.
+ <_>
+ 0 17 9 2 3.
+ <_>
+
+ <_>
+ 10 0 8 10 -1.
+ <_>
+ 14 0 4 5 2.
+ <_>
+ 10 5 4 5 2.
+ <_>
+
+ <_>
+ 1 0 4 16 -1.
+ <_>
+ 3 0 2 16 2.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 10 12 4 10 -1.
+ <_>
+ 10 17 4 5 2.
+ <_>
+
+ <_>
+ 8 4 10 6 -1.
+ <_>
+ 8 6 10 2 3.
+ <_>
+
+ <_>
+ 3 22 18 2 -1.
+ <_>
+ 12 22 9 2 2.
+ <_>
+
+ <_>
+ 7 7 11 6 -1.
+ <_>
+ 7 9 11 2 3.
+ <_>
+
+ <_>
+ 0 0 12 10 -1.
+ <_>
+ 0 0 6 5 2.
+ <_>
+ 6 5 6 5 2.
+ <_>
+
+ <_>
+ 10 1 12 6 -1.
+ <_>
+ 16 1 6 3 2.
+ <_>
+ 10 4 6 3 2.
+ <_>
+
+ <_>
+ 7 16 9 4 -1.
+ <_>
+ 7 18 9 2 2.
+ <_>
+
+ <_>
+ 5 7 15 16 -1.
+ <_>
+ 10 7 5 16 3.
+ <_>
+
+ <_>
+ 5 10 12 13 -1.
+ <_>
+ 11 10 6 13 2.
+ <_>
+
+ <_>
+ 6 2 12 6 -1.
+ <_>
+ 12 2 6 3 2.
+ <_>
+ 6 5 6 3 2.
+ <_>
+
+ <_>
+ 3 9 12 9 -1.
+ <_>
+ 3 12 12 3 3.
+ <_>
+
+ <_>
+ 16 2 8 6 -1.
+ <_>
+ 16 5 8 3 2.
+ <_>
+
+ <_>
+ 0 2 8 6 -1.
+ <_>
+ 0 5 8 3 2.
+ <_>
+
+ <_>
+ 0 3 24 11 -1.
+ <_>
+ 0 3 12 11 2.
+ <_>
+
+ <_>
+ 0 13 8 10 -1.
+ <_>
+ 0 13 4 5 2.
+ <_>
+ 4 18 4 5 2.
+ <_>
+
+ <_>
+ 10 14 4 10 -1.
+ <_>
+ 10 19 4 5 2.
+ <_>
+
+ <_>
+ 10 2 4 21 -1.
+ <_>
+ 10 9 4 7 3.
+ <_>
+
+ <_>
+ 4 4 15 9 -1.
+ <_>
+ 4 7 15 3 3.
+ <_>
+
+ <_>
+ 0 1 24 6 -1.
+ <_>
+ 8 1 8 6 3.
+ <_>
+
+ <_>
+ 9 6 5 16 -1.
+ <_>
+ 9 14 5 8 2.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 9 21 6 3 3.
+ <_>
+
+ <_>
+ 6 5 3 12 -1.
+ <_>
+ 6 11 3 6 2.
+ <_>
+
+ <_>
+ 11 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 5 6 9 8 -1.
+ <_>
+ 8 6 3 8 3.
+ <_>
+
+ <_>
+ 4 3 20 2 -1.
+ <_>
+ 4 4 20 1 2.
+ <_>
+
+ <_>
+ 2 10 18 3 -1.
+ <_>
+ 8 10 6 3 3.
+ <_>
+
+ <_>
+ 7 15 10 6 -1.
+ <_>
+ 7 17 10 2 3.
+ <_>
+
+ <_>
+ 1 4 4 18 -1.
+ <_>
+ 1 4 2 9 2.
+ <_>
+ 3 13 2 9 2.
+ <_>
+
+ <_>
+ 13 0 6 9 -1.
+ <_>
+ 15 0 2 9 3.
+ <_>
+
+ <_>
+ 5 0 6 9 -1.
+ <_>
+ 7 0 2 9 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 6 7 9 6 -1.
+ <_>
+ 9 7 3 6 3.
+ <_>
+
+ <_>
+ 3 0 18 2 -1.
+ <_>
+ 3 1 18 1 2.
+ <_>
+
+ <_>
+ 0 10 20 4 -1.
+ <_>
+ 0 10 10 2 2.
+ <_>
+ 10 12 10 2 2.
+ <_>
+
+ <_>
+ 10 2 4 12 -1.
+ <_>
+ 10 8 4 6 2.
+ <_>
+
+ <_>
+ 6 5 6 12 -1.
+ <_>
+ 6 5 3 6 2.
+ <_>
+ 9 11 3 6 2.
+ <_>
+
+ <_>
+ 6 0 18 22 -1.
+ <_>
+ 15 0 9 11 2.
+ <_>
+ 6 11 9 11 2.
+ <_>
+
+ <_>
+ 0 0 18 22 -1.
+ <_>
+ 0 0 9 11 2.
+ <_>
+ 9 11 9 11 2.
+ <_>
+
+ <_>
+ 18 2 6 11 -1.
+ <_>
+ 20 2 2 11 3.
+ <_>
+
+ <_>
+ 0 2 6 11 -1.
+ <_>
+ 2 2 2 11 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 0 0 20 3 -1.
+ <_>
+ 0 1 20 1 3.
+ <_>
+
+ <_>
+ 2 2 20 2 -1.
+ <_>
+ 2 3 20 1 2.
+ <_>
+
+ <_>
+ 1 10 18 2 -1.
+ <_>
+ 1 11 18 1 2.
+ <_>
+
+ <_>
+ 18 7 6 9 -1.
+ <_>
+ 18 10 6 3 3.
+ <_>
+
+ <_>
+ 0 0 22 9 -1.
+ <_>
+ 0 3 22 3 3.
+ <_>
+
+ <_>
+ 17 3 6 9 -1.
+ <_>
+ 17 6 6 3 3.
+ <_>
+
+ <_>
+ 0 7 6 9 -1.
+ <_>
+ 0 10 6 3 3.
+ <_>
+
+ <_>
+ 0 6 24 6 -1.
+ <_>
+ 0 8 24 2 3.
+ <_>
+
+ <_>
+ 0 2 6 10 -1.
+ <_>
+ 2 2 2 10 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 15 0 6 9 -1.
+ <_>
+ 17 0 2 9 3.
+ <_>
+
+ <_>
+ 3 0 6 9 -1.
+ <_>
+ 5 0 2 9 3.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 15 14 9 6 -1.
+ <_>
+ 15 16 9 2 3.
+ <_>
+
+ <_>
+ 0 15 23 6 -1.
+ <_>
+ 0 17 23 2 3.
+ <_>
+
+ <_>
+ 5 15 18 3 -1.
+ <_>
+ 5 16 18 1 3.
+ <_>
+
+ <_>
+ 0 14 9 6 -1.
+ <_>
+ 0 16 9 2 3.
+ <_>
+
+ <_>
+ 9 8 8 10 -1.
+ <_>
+ 13 8 4 5 2.
+ <_>
+ 9 13 4 5 2.
+ <_>
+
+ <_>
+ 3 7 15 6 -1.
+ <_>
+ 8 7 5 6 3.
+ <_>
+
+ <_>
+ 9 8 8 10 -1.
+ <_>
+ 13 8 4 5 2.
+ <_>
+ 9 13 4 5 2.
+ <_>
+
+ <_>
+ 5 0 6 12 -1.
+ <_>
+ 8 0 3 12 2.
+ <_>
+
+ <_>
+ 9 8 8 10 -1.
+ <_>
+ 13 8 4 5 2.
+ <_>
+ 9 13 4 5 2.
+ <_>
+
+ <_>
+ 8 5 6 9 -1.
+ <_>
+ 10 5 2 9 3.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+ 10 15 2 9 2.
+ <_>
+
+ <_>
+ 5 7 12 4 -1.
+ <_>
+ 11 7 6 4 2.
+ <_>
+
+ <_>
+ 9 8 8 10 -1.
+ <_>
+ 13 8 4 5 2.
+ <_>
+ 9 13 4 5 2.
+ <_>
+
+ <_>
+ 7 8 8 10 -1.
+ <_>
+ 7 8 4 5 2.
+ <_>
+ 11 13 4 5 2.
+ <_>
+
+ <_>
+ 11 10 6 14 -1.
+ <_>
+ 14 10 3 7 2.
+ <_>
+ 11 17 3 7 2.
+ <_>
+
+ <_>
+ 9 5 6 19 -1.
+ <_>
+ 12 5 3 19 2.
+ <_>
+
+ <_>
+ 6 12 12 6 -1.
+ <_>
+ 12 12 6 3 2.
+ <_>
+ 6 15 6 3 2.
+ <_>
+
+ <_>
+ 1 9 18 6 -1.
+ <_>
+ 1 9 9 3 2.
+ <_>
+ 10 12 9 3 2.
+ <_>
+
+ <_>
+ 16 14 8 10 -1.
+ <_>
+ 20 14 4 5 2.
+ <_>
+ 16 19 4 5 2.
+ <_>
+
+ <_>
+ 0 9 22 8 -1.
+ <_>
+ 0 9 11 4 2.
+ <_>
+ 11 13 11 4 2.
+ <_>
+
+ <_>
+ 8 18 12 6 -1.
+ <_>
+ 14 18 6 3 2.
+ <_>
+ 8 21 6 3 2.
+ <_>
+
+ <_>
+ 0 6 20 18 -1.
+ <_>
+ 0 6 10 9 2.
+ <_>
+ 10 15 10 9 2.
+ <_>
+
+ <_>
+ 3 6 20 12 -1.
+ <_>
+ 13 6 10 6 2.
+ <_>
+ 3 12 10 6 2.
+ <_>
+
+ <_>
+ 0 16 10 8 -1.
+ <_>
+ 0 16 5 4 2.
+ <_>
+ 5 20 5 4 2.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 0 11 19 3 -1.
+ <_>
+ 0 12 19 1 3.
+ <_>
+
+ <_>
+ 14 6 6 9 -1.
+ <_>
+ 14 9 6 3 3.
+ <_>
+
+ <_>
+ 1 7 22 4 -1.
+ <_>
+ 1 7 11 2 2.
+ <_>
+ 12 9 11 2 2.
+ <_>
+
+ <_>
+ 13 6 7 12 -1.
+ <_>
+ 13 10 7 4 3.
+ <_>
+
+ <_>
+ 4 7 11 9 -1.
+ <_>
+ 4 10 11 3 3.
+ <_>
+
+ <_>
+ 12 10 10 8 -1.
+ <_>
+ 17 10 5 4 2.
+ <_>
+ 12 14 5 4 2.
+ <_>
+
+ <_>
+ 2 12 9 7 -1.
+ <_>
+ 5 12 3 7 3.
+ <_>
+
+ <_>
+ 16 14 6 9 -1.
+ <_>
+ 16 17 6 3 3.
+ <_>
+
+ <_>
+ 3 12 6 12 -1.
+ <_>
+ 3 16 6 4 3.
+ <_>
+
+ <_>
+ 14 13 6 6 -1.
+ <_>
+ 14 16 6 3 2.
+ <_>
+
+ <_>
+ 8 0 6 9 -1.
+ <_>
+ 10 0 2 9 3.
+ <_>
+
+ <_>
+ 9 1 6 23 -1.
+ <_>
+ 11 1 2 23 3.
+ <_>
+
+ <_>
+ 0 16 9 6 -1.
+ <_>
+ 0 18 9 2 3.
+ <_>
+
+ <_>
+ 4 17 18 3 -1.
+ <_>
+ 4 18 18 1 3.
+ <_>
+
+ <_>
+ 5 2 13 14 -1.
+ <_>
+ 5 9 13 7 2.
+ <_>
+
+ <_>
+ 15 0 8 12 -1.
+ <_>
+ 19 0 4 6 2.
+ <_>
+ 15 6 4 6 2.
+ <_>
+
+ <_>
+ 0 0 8 12 -1.
+ <_>
+ 0 0 4 6 2.
+ <_>
+ 4 6 4 6 2.
+ <_>
+
+ <_>
+ 8 2 8 7 -1.
+ <_>
+ 8 2 4 7 2.
+ <_>
+
+ <_>
+ 1 1 6 9 -1.
+ <_>
+ 3 1 2 9 3.
+ <_>
+
+ <_>
+ 14 8 6 12 -1.
+ <_>
+ 17 8 3 6 2.
+ <_>
+ 14 14 3 6 2.
+ <_>
+
+ <_>
+ 4 8 6 12 -1.
+ <_>
+ 4 8 3 6 2.
+ <_>
+ 7 14 3 6 2.
+ <_>
+
+ <_>
+ 16 5 5 15 -1.
+ <_>
+ 16 10 5 5 3.
+ <_>
+
+ <_>
+ 3 5 5 15 -1.
+ <_>
+ 3 10 5 5 3.
+ <_>
+
+ <_>
+ 18 4 6 9 -1.
+ <_>
+ 18 7 6 3 3.
+ <_>
+
+ <_>
+ 1 7 6 15 -1.
+ <_>
+ 1 12 6 5 3.
+ <_>
+
+ <_>
+ 11 15 12 8 -1.
+ <_>
+ 17 15 6 4 2.
+ <_>
+ 11 19 6 4 2.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 0 2 12 2 2.
+ <_>
+ 12 4 12 2 2.
+ <_>
+
+ <_>
+ 15 1 2 19 -1.
+ <_>
+ 15 1 1 19 2.
+ <_>
+
+ <_>
+ 7 1 2 19 -1.
+ <_>
+ 8 1 1 19 2.
+ <_>
+
+ <_>
+ 22 1 2 20 -1.
+ <_>
+ 22 1 1 20 2.
+ <_>
+
+ <_>
+ 0 1 2 20 -1.
+ <_>
+ 1 1 1 20 2.
+ <_>
+
+ <_>
+ 18 11 6 12 -1.
+ <_>
+ 20 11 2 12 3.
+ <_>
+
+ <_>
+ 0 11 6 12 -1.
+ <_>
+ 2 11 2 12 3.
+ <_>
+
+ <_>
+ 3 6 18 14 -1.
+ <_>
+ 3 13 18 7 2.
+ <_>
+
+ <_>
+ 6 10 7 8 -1.
+ <_>
+ 6 14 7 4 2.
+ <_>
+
+ <_>
+ 7 9 12 12 -1.
+ <_>
+ 7 13 12 4 3.
+ <_>
+
+ <_>
+ 2 18 18 5 -1.
+ <_>
+ 11 18 9 5 2.
+ <_>
+
+ <_>
+ 4 21 20 3 -1.
+ <_>
+ 4 22 20 1 3.
+ <_>
+
+ <_>
+ 9 12 6 12 -1.
+ <_>
+ 9 12 3 6 2.
+ <_>
+ 12 18 3 6 2.
+ <_>
+
+ <_>
+ 4 6 18 3 -1.
+ <_>
+ 4 7 18 1 3.
+ <_>
+
+ <_>
+ 3 6 18 3 -1.
+ <_>
+ 3 7 18 1 3.
+ <_>
+
+ <_>
+ 18 4 6 9 -1.
+ <_>
+ 18 7 6 3 3.
+ <_>
+
+ <_>
+ 2 12 9 6 -1.
+ <_>
+ 2 14 9 2 3.
+ <_>
+
+ <_>
+ 4 14 18 4 -1.
+ <_>
+ 13 14 9 2 2.
+ <_>
+ 4 16 9 2 2.
+ <_>
+
+ <_>
+ 7 7 6 14 -1.
+ <_>
+ 7 7 3 7 2.
+ <_>
+ 10 14 3 7 2.
+ <_>
+
+ <_>
+ 7 13 12 6 -1.
+ <_>
+ 13 13 6 3 2.
+ <_>
+ 7 16 6 3 2.
+ <_>
+
+ <_>
+ 6 7 12 9 -1.
+ <_>
+ 10 7 4 9 3.
+ <_>
+
+ <_>
+ 12 12 6 6 -1.
+ <_>
+ 12 12 3 6 2.
+ <_>
+
+ <_>
+ 0 2 4 10 -1.
+ <_>
+ 0 7 4 5 2.
+ <_>
+
+ <_>
+ 8 0 9 6 -1.
+ <_>
+ 11 0 3 6 3.
+ <_>
+
+ <_>
+ 2 9 12 6 -1.
+ <_>
+ 2 12 12 3 2.
+ <_>
+
+ <_>
+ 13 10 6 9 -1.
+ <_>
+ 13 13 6 3 3.
+ <_>
+
+ <_>
+ 5 10 6 9 -1.
+ <_>
+ 5 13 6 3 3.
+ <_>
+
+ <_>
+ 9 15 9 6 -1.
+ <_>
+ 9 17 9 2 3.
+ <_>
+
+ <_>
+ 5 16 12 6 -1.
+ <_>
+ 5 19 12 3 2.
+ <_>
+
+ <_>
+ 3 2 20 3 -1.
+ <_>
+ 3 3 20 1 3.
+ <_>
+
+ <_>
+ 2 5 12 6 -1.
+ <_>
+ 6 5 4 6 3.
+ <_>
+
+ <_>
+ 11 0 3 24 -1.
+ <_>
+ 12 0 1 24 3.
+ <_>
+
+ <_>
+ 3 16 15 4 -1.
+ <_>
+ 8 16 5 4 3.
+ <_>
+
+ <_>
+ 9 12 6 12 -1.
+ <_>
+ 9 18 6 6 2.
+ <_>
+
+ <_>
+ 1 15 12 8 -1.
+ <_>
+ 1 15 6 4 2.
+ <_>
+ 7 19 6 4 2.
+ <_>
+
+ <_>
+ 15 10 8 14 -1.
+ <_>
+ 19 10 4 7 2.
+ <_>
+ 15 17 4 7 2.
+ <_>
+
+ <_>
+ 1 9 8 14 -1.
+ <_>
+ 1 9 4 7 2.
+ <_>
+ 5 16 4 7 2.
+ <_>
+
+ <_>
+ 9 11 9 10 -1.
+ <_>
+ 9 16 9 5 2.
+ <_>
+
+ <_>
+ 6 7 12 6 -1.
+ <_>
+ 6 9 12 2 3.
+ <_>
+
+ <_>
+ 10 15 6 9 -1.
+ <_>
+ 12 15 2 9 3.
+ <_>
+
+ <_>
+ 7 8 9 7 -1.
+ <_>
+ 10 8 3 7 3.
+ <_>
+
+ <_>
+ 10 4 8 10 -1.
+ <_>
+ 14 4 4 5 2.
+ <_>
+ 10 9 4 5 2.
+ <_>
+
+ <_>
+ 4 6 6 9 -1.
+ <_>
+ 4 9 6 3 3.
+ <_>
+
+ <_>
+ 0 6 24 12 -1.
+ <_>
+ 8 6 8 12 3.
+ <_>
+
+ <_>
+ 3 7 6 14 -1.
+ <_>
+ 6 7 3 14 2.
+ <_>
+
+ <_>
+ 19 8 5 8 -1.
+ <_>
+ 19 12 5 4 2.
+ <_>
+
+ <_>
+ 0 8 5 8 -1.
+ <_>
+ 0 12 5 4 2.
+ <_>
+
+ <_>
+ 17 3 6 6 -1.
+ <_>
+ 17 6 6 3 2.
+ <_>
+
+ <_>
+ 1 3 6 6 -1.
+ <_>
+ 1 6 6 3 2.
+ <_>
+
+ <_>
+ 18 2 6 9 -1.
+ <_>
+ 18 5 6 3 3.
+ <_>
+
+ <_>
+ 0 2 6 9 -1.
+ <_>
+ 0 5 6 3 3.
+ <_>
+
+ <_>
+ 3 3 18 6 -1.
+ <_>
+ 3 5 18 2 3.
+ <_>
+
+ <_>
+ 2 3 9 6 -1.
+ <_>
+ 2 5 9 2 3.
+ <_>
+
+ <_>
+ 9 3 10 8 -1.
+ <_>
+ 14 3 5 4 2.
+ <_>
+ 9 7 5 4 2.
+ <_>
+
+ <_>
+ 5 3 10 8 -1.
+ <_>
+ 5 3 5 4 2.
+ <_>
+ 10 7 5 4 2.
+ <_>
+
+ <_>
+ 10 11 6 12 -1.
+ <_>
+ 10 11 3 12 2.
+ <_>
+
+ <_>
+ 8 11 6 11 -1.
+ <_>
+ 11 11 3 11 2.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 8 5 4 2.
+ <_>
+
+ <_>
+ 9 6 6 7 -1.
+ <_>
+ 12 6 3 7 2.
+ <_>
+
+ <_>
+ 5 18 18 3 -1.
+ <_>
+ 5 19 18 1 3.
+ <_>
+
+ <_>
+ 8 4 6 9 -1.
+ <_>
+ 10 4 2 9 3.
+ <_>
+
+ <_>
+ 8 1 9 7 -1.
+ <_>
+ 11 1 3 7 3.
+ <_>
+
+ <_>
+ 6 11 6 6 -1.
+ <_>
+ 9 11 3 6 2.
+ <_>
+
+ <_>
+ 14 12 4 11 -1.
+ <_>
+ 14 12 2 11 2.
+ <_>
+
+ <_>
+ 6 12 4 11 -1.
+ <_>
+ 8 12 2 11 2.
+ <_>
+
+ <_>
+ 8 0 12 18 -1.
+ <_>
+ 12 0 4 18 3.
+ <_>
+
+ <_>
+ 2 12 10 5 -1.
+ <_>
+ 7 12 5 5 2.
+ <_>
+
+ <_>
+ 2 20 22 3 -1.
+ <_>
+ 2 21 22 1 3.
+ <_>
+
+ <_>
+ 0 4 2 20 -1.
+ <_>
+ 1 4 1 20 2.
+ <_>
+
+ <_>
+ 0 2 24 4 -1.
+ <_>
+ 8 2 8 4 3.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 10 10 2 2.
+ <_>
+
+ <_>
+ 6 7 8 10 -1.
+ <_>
+ 6 7 4 5 2.
+ <_>
+ 10 12 4 5 2.
+ <_>
+
+ <_>
+ 14 0 6 14 -1.
+ <_>
+ 17 0 3 7 2.
+ <_>
+ 14 7 3 7 2.
+ <_>
+
+ <_>
+ 4 11 5 8 -1.
+ <_>
+ 4 15 5 4 2.
+ <_>
+
+ <_>
+ 2 0 20 9 -1.
+ <_>
+ 2 3 20 3 3.
+ <_>
+
+ <_>
+ 6 7 12 8 -1.
+ <_>
+ 6 7 6 4 2.
+ <_>
+ 12 11 6 4 2.
+ <_>
+
+ <_>
+ 9 17 6 6 -1.
+ <_>
+ 9 20 6 3 2.
+ <_>
+
+ <_>
+ 7 10 10 4 -1.
+ <_>
+ 7 12 10 2 2.
+ <_>
+
+ <_>
+ 6 5 12 9 -1.
+ <_>
+ 10 5 4 9 3.
+ <_>
+
+ <_>
+ 5 11 6 8 -1.
+ <_>
+ 8 11 3 8 2.
+ <_>
+
+ <_>
+ 18 4 4 17 -1.
+ <_>
+ 18 4 2 17 2.
+ <_>
+
+ <_>
+ 0 0 6 6 -1.
+ <_>
+ 3 0 3 6 2.
+ <_>
+
+ <_>
+ 18 4 4 17 -1.
+ <_>
+ 18 4 2 17 2.
+ <_>
+
+ <_>
+ 2 4 4 17 -1.
+ <_>
+ 4 4 2 17 2.
+ <_>
+
+ <_>
+ 5 18 19 3 -1.
+ <_>
+ 5 19 19 1 3.
+ <_>
+
+ <_>
+ 11 0 2 18 -1.
+ <_>
+ 11 9 2 9 2.
+ <_>
+
+ <_>
+ 15 4 2 18 -1.
+ <_>
+ 15 13 2 9 2.
+ <_>
+
+ <_>
+ 7 4 2 18 -1.
+ <_>
+ 7 13 2 9 2.
+ <_>
+
+ <_>
+ 7 11 10 8 -1.
+ <_>
+ 12 11 5 4 2.
+ <_>
+ 7 15 5 4 2.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+
+ <_>
+ 10 0 6 9 -1.
+ <_>
+ 12 0 2 9 3.
+ <_>
+
+ <_>
+ 2 9 16 8 -1.
+ <_>
+ 2 9 8 4 2.
+ <_>
+ 10 13 8 4 2.
+ <_>
+
+ <_>
+ 14 15 6 9 -1.
+ <_>
+ 14 18 6 3 3.
+ <_>
+
+ <_>
+ 8 7 6 9 -1.
+ <_>
+ 10 7 2 9 3.
+ <_>
+
+ <_>
+ 14 15 6 9 -1.
+ <_>
+ 14 18 6 3 3.
+ <_>
+
+ <_>
+ 3 12 12 6 -1.
+ <_>
+ 3 14 12 2 3.
+ <_>
+
+ <_>
+ 14 12 9 6 -1.
+ <_>
+ 14 14 9 2 3.
+ <_>
+
+ <_>
+ 1 12 9 6 -1.
+ <_>
+ 1 14 9 2 3.
+ <_>
+
+ <_>
+ 3 7 18 3 -1.
+ <_>
+ 3 8 18 1 3.
+ <_>
+
+ <_>
+ 1 7 22 6 -1.
+ <_>
+ 1 9 22 2 3.
+ <_>
+
+ <_>
+ 18 4 6 6 -1.
+ <_>
+ 18 7 6 3 2.
+ <_>
+
+ <_>
+ 0 4 6 6 -1.
+ <_>
+ 0 7 6 3 2.
+ <_>
+
+ <_>
+ 5 11 16 6 -1.
+ <_>
+ 5 14 16 3 2.
+ <_>
+
+ <_>
+ 6 16 9 4 -1.
+ <_>
+ 6 18 9 2 2.
+ <_>
+
+ <_>
+ 14 15 6 9 -1.
+ <_>
+ 14 18 6 3 3.
+ <_>
+
+ <_>
+ 4 15 6 9 -1.
+ <_>
+ 4 18 6 3 3.
+ <_>
+
+ <_>
+ 15 1 6 23 -1.
+ <_>
+ 17 1 2 23 3.
+ <_>
+
+ <_>
+ 0 21 24 3 -1.
+ <_>
+ 8 21 8 3 3.
+ <_>
+
+ <_>
+ 0 20 24 4 -1.
+ <_>
+ 8 20 8 4 3.
+ <_>
+
+ <_>
+ 3 1 6 23 -1.
+ <_>
+ 5 1 2 23 3.
+ <_>
+
+ <_>
+ 3 17 18 3 -1.
+ <_>
+ 3 18 18 1 3.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 1 16 22 4 -1.
+ <_>
+ 12 16 11 2 2.
+ <_>
+ 1 18 11 2 2.
+ <_>
+
+ <_>
+ 0 16 9 6 -1.
+ <_>
+ 0 18 9 2 3.
+ <_>
+
+ <_>
+ 2 10 21 3 -1.
+ <_>
+ 9 10 7 3 3.
+ <_>
+
+ <_>
+ 2 18 12 6 -1.
+ <_>
+ 2 18 6 3 2.
+ <_>
+ 8 21 6 3 2.
+ <_>
+
+ <_>
+ 0 5 24 4 -1.
+ <_>
+ 0 7 24 2 2.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 10 7 6 12 -1.
+ <_>
+ 10 13 6 6 2.
+ <_>
+
+ <_>
+ 6 6 6 9 -1.
+ <_>
+ 8 6 2 9 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 9 7 6 9 -1.
+ <_>
+ 11 7 2 9 3.
+ <_>
+
+ <_>
+ 2 1 20 3 -1.
+ <_>
+ 2 2 20 1 3.
+ <_>
+
+ <_>
+ 1 18 12 6 -1.
+ <_>
+ 1 18 6 3 2.
+ <_>
+ 7 21 6 3 2.
+ <_>
+
+ <_>
+ 13 2 4 13 -1.
+ <_>
+ 13 2 2 13 2.
+ <_>
+
+ <_>
+ 6 7 12 4 -1.
+ <_>
+ 12 7 6 4 2.
+ <_>
+
+ <_>
+ 10 1 4 13 -1.
+ <_>
+ 10 1 2 13 2.
+ <_>
+
+ <_>
+ 6 0 3 18 -1.
+ <_>
+ 7 0 1 18 3.
+ <_>
+
+ <_>
+ 14 3 10 5 -1.
+ <_>
+ 14 3 5 5 2.
+ <_>
+
+ <_>
+ 6 15 12 8 -1.
+ <_>
+ 10 15 4 8 3.
+ <_>
+
+ <_>
+ 9 10 6 9 -1.
+ <_>
+ 11 10 2 9 3.
+ <_>
+
+ <_>
+ 8 3 4 9 -1.
+ <_>
+ 10 3 2 9 2.
+ <_>
+
+ <_>
+ 17 0 6 14 -1.
+ <_>
+ 20 0 3 7 2.
+ <_>
+ 17 7 3 7 2.
+ <_>
+
+ <_>
+ 1 0 6 14 -1.
+ <_>
+ 1 0 3 7 2.
+ <_>
+ 4 7 3 7 2.
+ <_>
+
+ <_>
+ 14 0 6 16 -1.
+ <_>
+ 17 0 3 8 2.
+ <_>
+ 14 8 3 8 2.
+ <_>
+
+ <_>
+ 7 4 4 10 -1.
+ <_>
+ 9 4 2 10 2.
+ <_>
+
+ <_>
+ 3 17 18 6 -1.
+ <_>
+ 12 17 9 3 2.
+ <_>
+ 3 20 9 3 2.
+ <_>
+
+ <_>
+ 1 20 22 4 -1.
+ <_>
+ 12 20 11 4 2.
+ <_>
+
+ <_>
+ 14 3 10 5 -1.
+ <_>
+ 14 3 5 5 2.
+ <_>
+
+ <_>
+ 0 3 10 5 -1.
+ <_>
+ 5 3 5 5 2.
+ <_>
+
+ <_>
+ 12 6 12 16 -1.
+ <_>
+ 16 6 4 16 3.
+ <_>
+
+ <_>
+ 0 6 12 16 -1.
+ <_>
+ 4 6 4 16 3.
+ <_>
+
+ <_>
+ 10 9 5 15 -1.
+ <_>
+ 10 14 5 5 3.
+ <_>
+
+ <_>
+ 1 18 21 2 -1.
+ <_>
+ 1 19 21 1 2.
+ <_>
+
+ <_>
+ 15 0 9 6 -1.
+ <_>
+ 15 2 9 2 3.
+ <_>
+
+ <_>
+ 6 1 12 4 -1.
+ <_>
+ 12 1 6 4 2.
+ <_>
+
+ <_>
+ 6 0 12 12 -1.
+ <_>
+ 12 0 6 6 2.
+ <_>
+ 6 6 6 6 2.
+ <_>
+
+ <_>
+ 8 10 8 12 -1.
+ <_>
+ 8 10 4 6 2.
+ <_>
+ 12 16 4 6 2.
+ <_>
+
+ <_>
+ 14 16 10 8 -1.
+ <_>
+ 19 16 5 4 2.
+ <_>
+ 14 20 5 4 2.
+ <_>
+
+ <_>
+ 0 16 10 8 -1.
+ <_>
+ 0 16 5 4 2.
+ <_>
+ 5 20 5 4 2.
+ <_>
+
+ <_>
+ 10 12 12 5 -1.
+ <_>
+ 14 12 4 5 3.
+ <_>
+
+ <_>
+ 6 16 10 8 -1.
+ <_>
+ 6 16 5 4 2.
+ <_>
+ 11 20 5 4 2.
+ <_>
+
+ <_>
+ 7 6 12 6 -1.
+ <_>
+ 13 6 6 3 2.
+ <_>
+ 7 9 6 3 2.
+ <_>
+
+ <_>
+ 9 6 4 18 -1.
+ <_>
+ 9 6 2 9 2.
+ <_>
+ 11 15 2 9 2.
+ <_>
+
+ <_>
+ 10 9 6 14 -1.
+ <_>
+ 13 9 3 7 2.
+ <_>
+ 10 16 3 7 2.
+ <_>
+
+ <_>
+ 8 9 6 14 -1.
+ <_>
+ 8 9 3 7 2.
+ <_>
+ 11 16 3 7 2.
+ <_>
+
+ <_>
+ 7 4 11 12 -1.
+ <_>
+ 7 10 11 6 2.
+ <_>
+
+ <_>
+ 4 8 6 16 -1.
+ <_>
+ 4 8 3 8 2.
+ <_>
+ 7 16 3 8 2.
+ <_>
+
+ <_>
+ 17 3 4 21 -1.
+ <_>
+ 17 10 4 7 3.
+ <_>
+
+ <_>
+ 3 3 4 21 -1.
+ <_>
+ 3 10 4 7 3.
+ <_>
+
+ <_>
+ 10 1 8 18 -1.
+ <_>
+ 14 1 4 9 2.
+ <_>
+ 10 10 4 9 2.
+ <_>
+
+ <_>
+ 2 5 16 8 -1.
+ <_>
+ 2 5 8 4 2.
+ <_>
+ 10 9 8 4 2.
+ <_>
+
+ <_>
+ 3 6 18 12 -1.
+ <_>
+ 3 10 18 4 3.
+ <_>
+
+ <_>
+ 4 10 16 12 -1.
+ <_>
+ 4 14 16 4 3.
+ <_>
+
+ <_>
+ 15 4 8 20 -1.
+ <_>
+ 19 4 4 10 2.
+ <_>
+ 15 14 4 10 2.
+ <_>
+
+ <_>
+ 7 2 9 6 -1.
+ <_>
+ 10 2 3 6 3.
+ <_>
+
+ <_>
+ 15 4 8 20 -1.
+ <_>
+ 19 4 4 10 2.
+ <_>
+ 15 14 4 10 2.
+ <_>
+
+ <_>
+ 1 4 8 20 -1.
+ <_>
+ 1 4 4 10 2.
+ <_>
+ 5 14 4 10 2.
+ <_>
+
+ <_>
+ 11 8 8 14 -1.
+ <_>
+ 15 8 4 7 2.
+ <_>
+ 11 15 4 7 2.
+ <_>
+
+ <_>
+ 5 8 8 14 -1.
+ <_>
+ 5 8 4 7 2.
+ <_>
+ 9 15 4 7 2.
+ <_>
+
+ <_>
+ 10 13 5 8 -1.
+ <_>
+ 10 17 5 4 2.
+ <_>
+
+ <_>
+ 4 13 7 9 -1.
+ <_>
+ 4 16 7 3 3.
+ <_>
+
+ <_>
+ 0 13 24 10 -1.
+ <_>
+ 0 18 24 5 2.
+ <_>
+
+ <_>
+ 4 2 8 11 -1.
+ <_>
+ 8 2 4 11 2.
+ <_>
+
+ <_>
+ 10 2 8 16 -1.
+ <_>
+ 14 2 4 8 2.
+ <_>
+ 10 10 4 8 2.
+ <_>
+
+ <_>
+ 0 2 24 6 -1.
+ <_>
+ 0 2 12 3 2.
+ <_>
+ 12 5 12 3 2.
+ <_>
+
+ <_>
+ 6 0 12 9 -1.
+ <_>
+ 6 3 12 3 3.
+ <_>
+
+ <_>
+ 1 2 12 12 -1.
+ <_>
+ 1 2 6 6 2.
+ <_>
+ 7 8 6 6 2.
+ <_>
+
+ <_>
+ 18 5 6 9 -1.
+ <_>
+ 18 8 6 3 3.
+ <_>
+
+ <_>
+ 4 3 8 10 -1.
+ <_>
+ 4 3 4 5 2.
+ <_>
+ 8 8 4 5 2.
+ <_>
+
+ <_>
+ 6 21 18 3 -1.
+ <_>
+ 6 22 18 1 3.
+ <_>
+
+ <_>
+ 1 10 18 2 -1.
+ <_>
+ 1 11 18 1 2.
+ <_>
+
+ <_>
+ 1 10 22 3 -1.
+ <_>
+ 1 11 22 1 3.
+ <_>
+
+ <_>
+ 2 8 12 9 -1.
+ <_>
+ 2 11 12 3 3.
+ <_>
+
+ <_>
+ 12 8 12 6 -1.
+ <_>
+ 18 8 6 3 2.
+ <_>
+ 12 11 6 3 2.
+ <_>
+
+ <_>
+ 0 8 12 6 -1.
+ <_>
+ 0 8 6 3 2.
+ <_>
+ 6 11 6 3 2.
+ <_>
+
+ <_>
+ 10 15 6 9 -1.
+ <_>
+ 12 15 2 9 3.
+ <_>
+
+ <_>
+ 7 13 9 6 -1.
+ <_>
+ 7 15 9 2 3.
+ <_>
+
+ <_>
+ 9 8 7 12 -1.
+ <_>
+ 9 14 7 6 2.
+ <_>
+
+ <_>
+ 4 13 9 6 -1.
+ <_>
+ 7 13 3 6 3.
+ <_>
+
+ <_>
+ 6 15 18 4 -1.
+ <_>
+ 12 15 6 4 3.
+ <_>
+
+ <_>
+ 5 4 4 16 -1.
+ <_>
+ 7 4 2 16 2.
+ <_>
+
+ <_>
+ 10 15 6 9 -1.
+ <_>
+ 12 15 2 9 3.
+ <_>
+
+ <_>
+ 8 15 6 9 -1.
+ <_>
+ 10 15 2 9 3.
+ <_>
+
+ <_>
+ 9 11 12 10 -1.
+ <_>
+ 15 11 6 5 2.
+ <_>
+ 9 16 6 5 2.
+ <_>
+
+ <_>
+ 3 6 14 6 -1.
+ <_>
+ 3 8 14 2 3.
+ <_>
+
+ <_>
+ 4 2 17 8 -1.
+ <_>
+ 4 6 17 4 2.
+ <_>
+
+ <_>
+ 6 2 12 21 -1.
+ <_>
+ 6 9 12 7 3.
+ <_>
+
+ <_>
+ 8 1 9 9 -1.
+ <_>
+ 8 4 9 3 3.
+ <_>
+
+ <_>
+ 0 7 24 3 -1.
+ <_>
+ 12 7 12 3 2.
+ <_>
+
+ <_>
+ 11 6 9 10 -1.
+ <_>
+ 11 11 9 5 2.
+ <_>
+
+ <_>
+ 2 11 18 3 -1.
+ <_>
+ 2 12 18 1 3.
+ <_>
+
+ <_>
+ 8 16 9 4 -1.
+ <_>
+ 8 18 9 2 2.
+ <_>
+
+ <_>
+ 0 0 9 6 -1.
+ <_>
+ 0 2 9 2 3.
+ <_>
+
+ <_>
+ 0 11 24 6 -1.
+ <_>
+ 0 13 24 2 3.
+ <_>
+
+ <_>
+ 2 9 20 6 -1.
+ <_>
+ 2 12 20 3 2.
+ <_>
+
+ <_>
+ 4 5 16 12 -1.
+ <_>
+ 12 5 8 6 2.
+ <_>
+ 4 11 8 6 2.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 7 3 10 4 -1.
+ <_>
+ 7 5 10 2 2.
+ <_>
+
+ <_>
+ 9 15 6 8 -1.
+ <_>
+ 9 19 6 4 2.
+ <_>
+
+ <_>
+ 17 0 7 10 -1.
+ <_>
+ 17 5 7 5 2.
+ <_>
+
+ <_>
+ 0 0 7 10 -1.
+ <_>
+ 0 5 7 5 2.
+ <_>
+
+ <_>
+ 16 1 6 12 -1.
+ <_>
+ 19 1 3 6 2.
+ <_>
+ 16 7 3 6 2.
+ <_>
+
+ <_>
+ 1 0 19 8 -1.
+ <_>
+ 1 4 19 4 2.
+ <_>
+
+ <_>
+ 12 2 9 4 -1.
+ <_>
+ 12 4 9 2 2.
+ <_>
+
+ <_>
+ 3 2 9 4 -1.
+ <_>
+ 3 4 9 2 2.
+ <_>
+
+ <_>
+ 12 2 10 6 -1.
+ <_>
+ 12 4 10 2 3.
+ <_>
+
+ <_>
+ 3 4 18 2 -1.
+ <_>
+ 12 4 9 2 2.
+ <_>
+
+ <_>
+ 12 1 4 9 -1.
+ <_>
+ 12 1 2 9 2.
+ <_>
+
+ <_>
+ 8 1 4 9 -1.
+ <_>
+ 10 1 2 9 2.
+ <_>
+
+ <_>
+ 10 5 8 10 -1.
+ <_>
+ 14 5 4 5 2.
+ <_>
+ 10 10 4 5 2.
+ <_>
+
+ <_>
+ 6 4 12 13 -1.
+ <_>
+ 10 4 4 13 3.
+ <_>
+
+ <_>
+ 13 5 6 6 -1.
+ <_>
+ 13 5 3 6 2.
+ <_>
+
+ <_>
+ 1 5 12 3 -1.
+ <_>
+ 7 5 6 3 2.
+ <_>
+
+ <_>
+ 7 5 10 6 -1.
+ <_>
+ 7 7 10 2 3.
+ <_>
+
+ <_>
+ 2 0 21 5 -1.
+ <_>
+ 9 0 7 5 3.
+ <_>
+
+ <_>
+ 0 8 9 9 -1.
+ <_>
+ 0 11 9 3 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 0 3 6 7 -1.
+ <_>
+ 3 3 3 7 2.
+ <_>
+
+ <_>
+ 9 18 12 6 -1.
+ <_>
+ 15 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 2 8 20 6 -1.
+ <_>
+ 2 8 10 3 2.
+ <_>
+ 12 11 10 3 2.
+ <_>
+
+ <_>
+ 13 2 10 4 -1.
+ <_>
+ 13 4 10 2 2.
+ <_>
+
+ <_>
+ 4 5 5 18 -1.
+ <_>
+ 4 11 5 6 3.
+ <_>
+
+ <_>
+ 20 4 4 9 -1.
+ <_>
+ 20 4 2 9 2.
+ <_>
+
+ <_>
+ 8 6 8 14 -1.
+ <_>
+ 8 13 8 7 2.
+ <_>
+
+ <_>
+ 0 1 24 6 -1.
+ <_>
+ 12 1 12 3 2.
+ <_>
+ 0 4 12 3 2.
+ <_>
+
+ <_>
+ 0 4 4 9 -1.
+ <_>
+ 2 4 2 9 2.
+ <_>
+
+ <_>
+ 3 6 18 3 -1.
+ <_>
+ 3 7 18 1 3.
+ <_>
+
+ <_>
+ 3 17 16 6 -1.
+ <_>
+ 3 19 16 2 3.
+ <_>
+
+ <_>
+ 13 6 6 9 -1.
+ <_>
+ 13 9 6 3 3.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 5 6 7 3 2.
+ <_>
+ 12 9 7 3 2.
+ <_>
+
+ <_>
+ 13 5 8 10 -1.
+ <_>
+ 17 5 4 5 2.
+ <_>
+ 13 10 4 5 2.
+ <_>
+
+ <_>
+ 2 2 20 3 -1.
+ <_>
+ 2 3 20 1 3.
+ <_>
+
+ <_>
+ 9 2 9 6 -1.
+ <_>
+ 12 2 3 6 3.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 12 3 4 11 -1.
+ <_>
+ 12 3 2 11 2.
+ <_>
+
+ <_>
+ 8 3 4 11 -1.
+ <_>
+ 10 3 2 11 2.
+ <_>
+
+ <_>
+ 8 3 8 10 -1.
+ <_>
+ 12 3 4 5 2.
+ <_>
+ 8 8 4 5 2.
+ <_>
+
+ <_>
+ 11 1 2 18 -1.
+ <_>
+ 12 1 1 18 2.
+ <_>
+
+ <_>
+ 9 2 9 6 -1.
+ <_>
+ 12 2 3 6 3.
+ <_>
+
+ <_>
+ 0 2 19 3 -1.
+ <_>
+ 0 3 19 1 3.
+ <_>
+
+ <_>
+ 9 14 9 6 -1.
+ <_>
+ 9 16 9 2 3.
+ <_>
+
+ <_>
+ 1 8 18 5 -1.
+ <_>
+ 7 8 6 5 3.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 13 6 4 15 -1.
+ <_>
+ 13 11 4 5 3.
+ <_>
+
+ <_>
+ 1 5 18 3 -1.
+ <_>
+ 1 6 18 1 3.
+ <_>
+
+ <_>
+ 9 7 14 6 -1.
+ <_>
+ 9 9 14 2 3.
+ <_>
+
+ <_>
+ 2 16 18 3 -1.
+ <_>
+ 2 17 18 1 3.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 0 8 12 6 -1.
+ <_>
+ 0 8 6 3 2.
+ <_>
+ 6 11 6 3 2.
+ <_>
+
+ <_>
+ 9 13 7 8 -1.
+ <_>
+ 9 17 7 4 2.
+ <_>
+
+ <_>
+ 2 17 20 3 -1.
+ <_>
+ 2 18 20 1 3.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 4 0 15 4 -1.
+ <_>
+ 4 2 15 2 2.
+ <_>
+
+ <_>
+ 17 2 6 6 -1.
+ <_>
+ 17 5 6 3 2.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 0 6 6 3 3.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 0 17 9 6 -1.
+ <_>
+ 0 19 9 2 3.
+ <_>
+
+ <_>
+ 9 18 12 6 -1.
+ <_>
+ 15 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 3 15 6 9 -1.
+ <_>
+ 3 18 6 3 3.
+ <_>
+
+ <_>
+ 16 13 8 10 -1.
+ <_>
+ 20 13 4 5 2.
+ <_>
+ 16 18 4 5 2.
+ <_>
+
+ <_>
+ 0 14 24 4 -1.
+ <_>
+ 8 14 8 4 3.
+ <_>
+
+ <_>
+ 13 18 6 6 -1.
+ <_>
+ 13 18 3 6 2.
+ <_>
+
+ <_>
+ 0 13 8 10 -1.
+ <_>
+ 0 13 4 5 2.
+ <_>
+ 4 18 4 5 2.
+ <_>
+
+ <_>
+ 0 14 24 6 -1.
+ <_>
+ 0 17 24 3 2.
+ <_>
+
+ <_>
+ 5 2 12 8 -1.
+ <_>
+ 5 2 6 4 2.
+ <_>
+ 11 6 6 4 2.
+ <_>
+
+ <_>
+ 8 9 9 6 -1.
+ <_>
+ 11 9 3 6 3.
+ <_>
+
+ <_>
+ 4 3 16 4 -1.
+ <_>
+ 4 5 16 2 2.
+ <_>
+
+ <_>
+ 10 2 4 10 -1.
+ <_>
+ 10 7 4 5 2.
+ <_>
+
+ <_>
+ 8 4 5 8 -1.
+ <_>
+ 8 8 5 4 2.
+ <_>
+
+ <_>
+ 11 5 9 12 -1.
+ <_>
+ 11 9 9 4 3.
+ <_>
+
+ <_>
+ 4 5 9 12 -1.
+ <_>
+ 4 9 9 4 3.
+ <_>
+
+ <_>
+ 14 6 6 9 -1.
+ <_>
+ 14 9 6 3 3.
+ <_>
+
+ <_>
+ 2 4 20 12 -1.
+ <_>
+ 2 8 20 4 3.
+ <_>
+
+ <_>
+ 4 4 17 16 -1.
+ <_>
+ 4 12 17 8 2.
+ <_>
+
+ <_>
+ 8 7 7 6 -1.
+ <_>
+ 8 10 7 3 2.
+ <_>
+
+ <_>
+ 1 9 23 2 -1.
+ <_>
+ 1 10 23 1 2.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 13 3 4 9 -1.
+ <_>
+ 13 3 2 9 2.
+ <_>
+
+ <_>
+ 8 1 6 13 -1.
+ <_>
+ 10 1 2 13 3.
+ <_>
+
+ <_>
+ 4 22 18 2 -1.
+ <_>
+ 4 23 18 1 2.
+ <_>
+
+ <_>
+ 3 10 9 6 -1.
+ <_>
+ 6 10 3 6 3.
+ <_>
+
+ <_>
+ 14 0 2 24 -1.
+ <_>
+ 14 0 1 24 2.
+ <_>
+
+ <_>
+ 8 0 2 24 -1.
+ <_>
+ 9 0 1 24 2.
+ <_>
+
+ <_>
+ 3 2 18 10 -1.
+ <_>
+ 9 2 6 10 3.
+ <_>
+
+ <_>
+ 4 13 15 6 -1.
+ <_>
+ 9 13 5 6 3.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 9 21 6 3 3.
+ <_>
+
+ <_>
+ 9 1 4 11 -1.
+ <_>
+ 11 1 2 11 2.
+ <_>
+
+ <_>
+ 9 7 10 4 -1.
+ <_>
+ 9 7 5 4 2.
+ <_>
+
+ <_>
+ 7 0 10 18 -1.
+ <_>
+ 12 0 5 18 2.
+ <_>
+
+ <_>
+ 12 1 6 16 -1.
+ <_>
+ 14 1 2 16 3.
+ <_>
+
+ <_>
+ 6 1 6 16 -1.
+ <_>
+ 8 1 2 16 3.
+ <_>
+
+ <_>
+ 18 2 6 6 -1.
+ <_>
+ 18 5 6 3 2.
+ <_>
+
+ <_>
+ 3 5 18 2 -1.
+ <_>
+ 3 6 18 1 2.
+ <_>
+
+ <_>
+ 18 2 6 6 -1.
+ <_>
+ 18 5 6 3 2.
+ <_>
+
+ <_>
+ 0 2 6 6 -1.
+ <_>
+ 0 5 6 3 2.
+ <_>
+
+ <_>
+ 13 11 11 6 -1.
+ <_>
+ 13 13 11 2 3.
+ <_>
+
+ <_>
+ 5 7 10 4 -1.
+ <_>
+ 10 7 5 4 2.
+ <_>
+
+ <_>
+ 11 9 10 7 -1.
+ <_>
+ 11 9 5 7 2.
+ <_>
+
+ <_>
+ 3 9 10 7 -1.
+ <_>
+ 8 9 5 7 2.
+ <_>
+
+ <_>
+ 16 4 6 6 -1.
+ <_>
+ 16 4 3 6 2.
+ <_>
+
+ <_>
+ 5 6 10 8 -1.
+ <_>
+ 5 6 5 4 2.
+ <_>
+ 10 10 5 4 2.
+ <_>
+
+ <_>
+ 7 21 16 3 -1.
+ <_>
+ 7 21 8 3 2.
+ <_>
+
+ <_>
+ 1 21 16 3 -1.
+ <_>
+ 9 21 8 3 2.
+ <_>
+
+ <_>
+ 2 5 22 14 -1.
+ <_>
+ 13 5 11 7 2.
+ <_>
+ 2 12 11 7 2.
+ <_>
+
+ <_>
+ 3 10 8 10 -1.
+ <_>
+ 3 10 4 5 2.
+ <_>
+ 7 15 4 5 2.
+ <_>
+
+ <_>
+ 17 0 6 12 -1.
+ <_>
+ 20 0 3 6 2.
+ <_>
+ 17 6 3 6 2.
+ <_>
+
+ <_>
+ 5 2 6 18 -1.
+ <_>
+ 7 2 2 18 3.
+ <_>
+
+ <_>
+ 13 0 6 9 -1.
+ <_>
+ 15 0 2 9 3.
+ <_>
+
+ <_>
+ 0 12 7 9 -1.
+ <_>
+ 0 15 7 3 3.
+ <_>
+
+ <_>
+ 15 13 8 10 -1.
+ <_>
+ 19 13 4 5 2.
+ <_>
+ 15 18 4 5 2.
+ <_>
+
+ <_>
+ 1 0 6 12 -1.
+ <_>
+ 1 0 3 6 2.
+ <_>
+ 4 6 3 6 2.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 1 13 8 10 -1.
+ <_>
+ 1 13 4 5 2.
+ <_>
+ 5 18 4 5 2.
+ <_>
+
+ <_>
+ 3 21 19 2 -1.
+ <_>
+ 3 22 19 1 2.
+ <_>
+
+ <_>
+ 6 3 4 13 -1.
+ <_>
+ 8 3 2 13 2.
+ <_>
+
+ <_>
+ 5 10 18 3 -1.
+ <_>
+ 5 11 18 1 3.
+ <_>
+
+ <_>
+ 9 3 5 12 -1.
+ <_>
+ 9 7 5 4 3.
+ <_>
+
+ <_>
+ 11 2 4 15 -1.
+ <_>
+ 11 7 4 5 3.
+ <_>
+
+ <_>
+ 4 1 16 4 -1.
+ <_>
+ 4 3 16 2 2.
+ <_>
+
+ <_>
+ 6 0 18 3 -1.
+ <_>
+ 6 1 18 1 3.
+ <_>
+
+ <_>
+ 5 1 10 8 -1.
+ <_>
+ 5 1 5 4 2.
+ <_>
+ 10 5 5 4 2.
+ <_>
+
+ <_>
+ 11 18 12 6 -1.
+ <_>
+ 17 18 6 3 2.
+ <_>
+ 11 21 6 3 2.
+ <_>
+
+ <_>
+ 5 15 12 3 -1.
+ <_>
+ 11 15 6 3 2.
+ <_>
+
+ <_>
+ 1 10 22 4 -1.
+ <_>
+ 1 10 11 4 2.
+ <_>
+
+ <_>
+ 7 9 9 6 -1.
+ <_>
+ 10 9 3 6 3.
+ <_>
+
+ <_>
+ 6 11 12 5 -1.
+ <_>
+ 10 11 4 5 3.
+ <_>
+
+ <_>
+ 6 7 10 7 -1.
+ <_>
+ 11 7 5 7 2.
+ <_>
+
+ <_>
+ 11 2 8 10 -1.
+ <_>
+ 11 2 4 10 2.
+ <_>
+
+ <_>
+ 5 2 8 10 -1.
+ <_>
+ 9 2 4 10 2.
+ <_>
+
+ <_>
+ 6 4 18 6 -1.
+ <_>
+ 15 4 9 3 2.
+ <_>
+ 6 7 9 3 2.
+ <_>
+
+ <_>
+ 0 5 10 9 -1.
+ <_>
+ 0 8 10 3 3.
+ <_>
+
+ <_>
+ 2 7 21 6 -1.
+ <_>
+ 2 9 21 2 3.
+ <_>
+
+ <_>
+ 0 4 22 16 -1.
+ <_>
+ 0 4 11 8 2.
+ <_>
+ 11 12 11 8 2.
+ <_>
+
+ <_>
+ 9 0 6 22 -1.
+ <_>
+ 9 11 6 11 2.
+ <_>
+
+ <_>
+ 9 1 3 12 -1.
+ <_>
+ 9 7 3 6 2.
+ <_>
+
+ <_>
+ 12 0 12 18 -1.
+ <_>
+ 18 0 6 9 2.
+ <_>
+ 12 9 6 9 2.
+ <_>
+
+ <_>
+ 0 0 12 18 -1.
+ <_>
+ 0 0 6 9 2.
+ <_>
+ 6 9 6 9 2.
+ <_>
+
+ <_>
+ 1 1 22 4 -1.
+ <_>
+ 12 1 11 2 2.
+ <_>
+ 1 3 11 2 2.
+ <_>
+
+ <_>
+ 3 0 18 4 -1.
+ <_>
+ 3 2 18 2 2.
+ <_>
+
+ <_>
+ 2 5 22 6 -1.
+ <_>
+ 2 7 22 2 3.
+ <_>
+
+ <_>
+ 5 0 6 9 -1.
+ <_>
+ 5 3 6 3 3.
+ <_>
+
+ <_>
+ 10 14 6 9 -1.
+ <_>
+ 12 14 2 9 3.
+ <_>
+
+ <_>
+ 8 14 6 9 -1.
+ <_>
+ 10 14 2 9 3.
+ <_>
+
+ <_>
+ 5 18 18 3 -1.
+ <_>
+ 5 19 18 1 3.
+ <_>
+
+ <_>
+ 6 0 6 13 -1.
+ <_>
+ 9 0 3 13 2.
+ <_>
+
+ <_>
+ 7 4 12 4 -1.
+ <_>
+ 7 4 6 4 2.
+ <_>
+
+ <_>
+ 5 2 12 6 -1.
+ <_>
+ 9 2 4 6 3.
+ <_>
+
+ <_>
+ 4 1 18 3 -1.
+ <_>
+ 4 2 18 1 3.
+ <_>
+
+ <_>
+ 0 8 6 12 -1.
+ <_>
+ 0 12 6 4 3.
+ <_>
+
+ <_>
+ 9 15 6 9 -1.
+ <_>
+ 11 15 2 9 3.
+ <_>
+
+ <_>
+ 9 10 6 13 -1.
+ <_>
+ 11 10 2 13 3.
+ <_>
+
+ <_>
+ 6 17 18 2 -1.
+ <_>
+ 6 18 18 1 2.
+ <_>
+
+ <_>
+ 9 4 6 9 -1.
+ <_>
+ 11 4 2 9 3.
+ <_>
+
+ <_>
+ 10 0 6 9 -1.
+ <_>
+ 12 0 2 9 3.
+ <_>
+
+ <_>
+ 5 6 10 8 -1.
+ <_>
+ 5 6 5 4 2.
+ <_>
+ 10 10 5 4 2.
+ <_>
+
+ <_>
+ 14 9 5 8 -1.
+ <_>
+ 14 13 5 4 2.
+ <_>
+
+ <_>
+ 5 9 5 8 -1.
+ <_>
+ 5 13 5 4 2.
+ <_>
+
+ <_>
+ 14 11 9 6 -1.
+ <_>
+ 14 13 9 2 3.
+ <_>
+
+ <_>
+ 0 2 23 15 -1.
+ <_>
+ 0 7 23 5 3.
+ <_>
+
+ <_>
+ 16 0 8 12 -1.
+ <_>
+ 16 6 8 6 2.
+ <_>
+
+ <_>
+ 4 15 6 9 -1.
+ <_>
+ 4 18 6 3 3.
+ <_>
+
+ <_>
+ 8 18 9 4 -1.
+ <_>
+ 8 20 9 2 2.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 13 11 11 6 -1.
+ <_>
+ 13 13 11 2 3.
+ <_>
+
+ <_>
+ 0 11 11 6 -1.
+ <_>
+ 0 13 11 2 3.
+ <_>
+
+ <_>
+ 0 9 24 6 -1.
+ <_>
+ 12 9 12 3 2.
+ <_>
+ 0 12 12 3 2.
+ <_>
+
+ <_>
+ 6 16 8 8 -1.
+ <_>
+ 6 20 8 4 2.
+ <_>
+
+ <_>
+ 10 16 14 6 -1.
+ <_>
+ 10 18 14 2 3.
+ <_>
+
+ <_>
+ 1 1 21 3 -1.
+ <_>
+ 1 2 21 1 3.
+ <_>
+
+ <_>
+ 0 2 24 3 -1.
+ <_>
+ 0 2 12 3 2.
+ <_>
+
+ <_>
+ 2 15 8 5 -1.
+ <_>
+ 6 15 4 5 2.
+ <_>
+
+ <_>
+ 2 11 21 3 -1.
+ <_>
+ 9 11 7 3 3.
+ <_>
+
+ <_>
+ 1 18 12 6 -1.
+ <_>
+ 1 18 6 3 2.
+ <_>
+ 7 21 6 3 2.
+ <_>
+
+ <_>
+ 10 14 4 10 -1.
+ <_>
+ 10 19 4 5 2.
+ <_>
+
+ <_>
+ 7 7 4 10 -1.
+ <_>
+ 7 12 4 5 2.
+ <_>
+
+ <_>
+ 9 8 6 12 -1.
+ <_>
+ 9 12 6 4 3.
+ <_>
+
+ <_>
+ 7 1 9 6 -1.
+ <_>
+ 10 1 3 6 3.
+ <_>
+
+ <_>
+ 3 14 19 2 -1.
+ <_>
+ 3 15 19 1 2.
+ <_>
+
+ <_>
+ 7 7 10 10 -1.
+ <_>
+ 7 7 5 5 2.
+ <_>
+ 12 12 5 5 2.
+ <_>
+
+ <_>
+ 3 12 18 12 -1.
+ <_>
+ 3 12 9 12 2.
+ <_>
+
+ <_>
+ 8 0 6 12 -1.
+ <_>
+ 10 0 2 12 3.
+ <_>
+
+ <_>
+ 3 0 17 9 -1.
+ <_>
+ 3 3 17 3 3.
+ <_>
+
+ <_>
+ 6 0 12 11 -1.
+ <_>
+ 10 0 4 11 3.
+ <_>
+
+ <_>
+ 1 0 6 13 -1.
+ <_>
+ 4 0 3 13 2.
+ <_>
+
+ <_>
+ 5 8 16 6 -1.
+ <_>
+ 5 11 16 3 2.
+ <_>
+
+ <_>
+ 8 8 5 12 -1.
+ <_>
+ 8 14 5 6 2.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 9 21 6 3 3.
+ <_>
+
+ <_>
+ 0 0 6 6 -1.
+ <_>
+ 3 0 3 6 2.
+ <_>
+
+ <_>
+ 2 0 20 3 -1.
+ <_>
+ 2 1 20 1 3.
+ <_>
+
+ <_>
+ 4 6 15 10 -1.
+ <_>
+ 9 6 5 10 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 14 0 6 9 -1.
+ <_>
+ 16 0 2 9 3.
+ <_>
+
+ <_>
+ 7 16 9 6 -1.
+ <_>
+ 7 18 9 2 3.
+ <_>
+
+ <_>
+ 14 0 6 9 -1.
+ <_>
+ 16 0 2 9 3.
+ <_>
+
+ <_>
+ 4 0 6 9 -1.
+ <_>
+ 6 0 2 9 3.
+ <_>
+
+ <_>
+ 17 1 6 16 -1.
+ <_>
+ 19 1 2 16 3.
+ <_>
+
+ <_>
+ 1 1 6 16 -1.
+ <_>
+ 3 1 2 16 3.
+ <_>
+
+ <_>
+ 14 13 6 9 -1.
+ <_>
+ 14 16 6 3 3.
+ <_>
+
+ <_>
+ 0 0 6 9 -1.
+ <_>
+ 0 3 6 3 3.
+ <_>
+
+ <_>
+ 9 5 6 6 -1.
+ <_>
+ 9 5 3 6 2.
+ <_>
+
+ <_>
+ 3 10 9 6 -1.
+ <_>
+ 6 10 3 6 3.
+ <_>
+
+ <_>
+ 14 7 3 16 -1.
+ <_>
+ 14 15 3 8 2.
+ <_>
+
+ <_>
+ 4 10 14 12 -1.
+ <_>
+ 4 10 7 6 2.
+ <_>
+ 11 16 7 6 2.
+ <_>
+
+ <_>
+ 7 6 12 6 -1.
+ <_>
+ 7 8 12 2 3.
+ <_>
+
+ <_>
+ 7 2 4 20 -1.
+ <_>
+ 9 2 2 20 2.
+ <_>
+
+ <_>
+ 14 13 6 9 -1.
+ <_>
+ 14 16 6 3 3.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+
+ <_>
+ 14 13 6 9 -1.
+ <_>
+ 14 16 6 3 3.
+ <_>
+
+ <_>
+ 5 20 14 4 -1.
+ <_>
+ 5 22 14 2 2.
+ <_>
+
+ <_>
+ 4 4 16 12 -1.
+ <_>
+ 4 10 16 6 2.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 3 0 21 4 -1.
+ <_>
+ 3 2 21 2 2.
+ <_>
+
+ <_>
+ 4 13 6 9 -1.
+ <_>
+ 4 16 6 3 3.
+ <_>
+
+ <_>
+ 16 16 5 8 -1.
+ <_>
+ 16 20 5 4 2.
+ <_>
+
+ <_>
+ 4 0 16 16 -1.
+ <_>
+ 4 0 8 8 2.
+ <_>
+ 12 8 8 8 2.
+ <_>
+
+ <_>
+ 6 6 14 6 -1.
+ <_>
+ 13 6 7 3 2.
+ <_>
+ 6 9 7 3 2.
+ <_>
+
+ <_>
+ 10 5 4 15 -1.
+ <_>
+ 10 10 4 5 3.
+ <_>
+
+ <_>
+ 9 15 12 8 -1.
+ <_>
+ 15 15 6 4 2.
+ <_>
+ 9 19 6 4 2.
+ <_>
+
+ <_>
+ 6 7 12 4 -1.
+ <_>
+ 12 7 6 4 2.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 12 6 7 3 2.
+ <_>
+ 5 9 7 3 2.
+ <_>
+
+ <_>
+ 3 6 18 10 -1.
+ <_>
+ 3 6 9 5 2.
+ <_>
+ 12 11 9 5 2.
+ <_>
+
+ <_>
+ 6 0 18 21 -1.
+ <_>
+ 12 0 6 21 3.
+ <_>
+
+ <_>
+ 0 0 24 21 -1.
+ <_>
+ 8 0 8 21 3.
+ <_>
+
+ <_>
+ 6 18 18 3 -1.
+ <_>
+ 6 19 18 1 3.
+ <_>
+
+ <_>
+ 0 15 9 6 -1.
+ <_>
+ 0 17 9 2 3.
+ <_>
+
+ <_>
+ 4 3 19 2 -1.
+ <_>
+ 4 4 19 1 2.
+ <_>
+
+ <_>
+ 0 3 24 2 -1.
+ <_>
+ 0 4 24 1 2.
+ <_>
+
+ <_>
+ 15 14 9 4 -1.
+ <_>
+ 15 16 9 2 2.
+ <_>
+
+ <_>
+ 0 14 9 4 -1.
+ <_>
+ 0 16 9 2 2.
+ <_>
+
+ <_>
+ 6 15 18 2 -1.
+ <_>
+ 6 16 18 1 2.
+ <_>
+
+ <_>
+ 3 17 18 3 -1.
+ <_>
+ 3 18 18 1 3.
+ <_>
+
+ <_>
+ 12 0 3 23 -1.
+ <_>
+ 13 0 1 23 3.
+ <_>
+
+ <_>
+ 6 0 8 6 -1.
+ <_>
+ 6 3 8 3 2.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 9 0 3 23 -1.
+ <_>
+ 10 0 1 23 3.
+ <_>
+
+ <_>
+ 10 7 4 10 -1.
+ <_>
+ 10 12 4 5 2.
+ <_>
+
+ <_>
+ 7 8 10 12 -1.
+ <_>
+ 7 12 10 4 3.
+ <_>
+
+ <_>
+ 14 9 6 14 -1.
+ <_>
+ 17 9 3 7 2.
+ <_>
+ 14 16 3 7 2.
+ <_>
+
+ <_>
+ 2 0 10 9 -1.
+ <_>
+ 2 3 10 3 3.
+ <_>
+
+ <_>
+ 11 1 5 12 -1.
+ <_>
+ 11 7 5 6 2.
+ <_>
+
+ <_>
+ 1 4 12 10 -1.
+ <_>
+ 1 4 6 5 2.
+ <_>
+ 7 9 6 5 2.
+ <_>
+
+ <_>
+ 15 1 9 4 -1.
+ <_>
+ 15 3 9 2 2.
+ <_>
+
+ <_>
+ 1 2 8 10 -1.
+ <_>
+ 1 2 4 5 2.
+ <_>
+ 5 7 4 5 2.
+ <_>
+
+ <_>
+ 10 1 5 12 -1.
+ <_>
+ 10 5 5 4 3.
+ <_>
+
+ <_>
+ 4 0 14 24 -1.
+ <_>
+ 11 0 7 24 2.
+ <_>
+
+ <_>
+ 7 17 10 4 -1.
+ <_>
+ 7 19 10 2 2.
+ <_>
+
+ <_>
+ 10 14 4 10 -1.
+ <_>
+ 10 19 4 5 2.
+ <_>
+
+ <_>
+ 13 15 6 9 -1.
+ <_>
+ 15 15 2 9 3.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 3 22 18 1 3.
+ <_>
+
+ <_>
+ 13 15 6 9 -1.
+ <_>
+ 15 15 2 9 3.
+ <_>
+
+ <_>
+ 5 15 6 9 -1.
+ <_>
+ 7 15 2 9 3.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+ 10 15 2 9 2.
+ <_>
+
+ <_>
+ 7 3 6 11 -1.
+ <_>
+ 9 3 2 11 3.
+ <_>
+
+ <_>
+ 15 1 9 4 -1.
+ <_>
+ 15 3 9 2 2.
+ <_>
+
+ <_>
+ 5 4 14 8 -1.
+ <_>
+ 5 8 14 4 2.
+ <_>
+
+ <_>
+ 8 1 15 9 -1.
+ <_>
+ 8 4 15 3 3.
+ <_>
+
+ <_>
+ 7 2 8 10 -1.
+ <_>
+ 7 2 4 5 2.
+ <_>
+ 11 7 4 5 2.
+ <_>
+
+ <_>
+ 12 2 6 12 -1.
+ <_>
+ 12 2 3 12 2.
+ <_>
+
+ <_>
+ 6 2 6 12 -1.
+ <_>
+ 9 2 3 12 2.
+ <_>
+
+ <_>
+ 7 7 12 4 -1.
+ <_>
+ 7 7 6 4 2.
+ <_>
+
+ <_>
+ 6 3 12 10 -1.
+ <_>
+ 10 3 4 10 3.
+ <_>
+
+ <_>
+ 5 6 16 6 -1.
+ <_>
+ 13 6 8 3 2.
+ <_>
+ 5 9 8 3 2.
+ <_>
+
+ <_>
+ 3 1 18 9 -1.
+ <_>
+ 9 1 6 9 3.
+ <_>
+
+ <_>
+ 3 8 18 5 -1.
+ <_>
+ 9 8 6 5 3.
+ <_>
+
+ <_>
+ 0 0 24 22 -1.
+ <_>
+ 0 0 12 11 2.
+ <_>
+ 12 11 12 11 2.
+ <_>
+
+ <_>
+ 14 16 9 6 -1.
+ <_>
+ 14 18 9 2 3.
+ <_>
+
+ <_>
+ 0 16 24 8 -1.
+ <_>
+ 0 20 24 4 2.
+ <_>
+
+ <_>
+ 1 19 22 4 -1.
+ <_>
+ 12 19 11 2 2.
+ <_>
+ 1 21 11 2 2.
+ <_>
+
+ <_>
+ 1 16 9 6 -1.
+ <_>
+ 1 18 9 2 3.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 8 5 4 2.
+ <_>
+
+ <_>
+ 9 15 6 9 -1.
+ <_>
+ 11 15 2 9 3.
+ <_>
+
+ <_>
+ 10 18 12 6 -1.
+ <_>
+ 16 18 6 3 2.
+ <_>
+ 10 21 6 3 2.
+ <_>
+
+ <_>
+ 2 18 12 6 -1.
+ <_>
+ 2 18 6 3 2.
+ <_>
+ 8 21 6 3 2.
+ <_>
+
+ <_>
+ 8 3 16 9 -1.
+ <_>
+ 8 6 16 3 3.
+ <_>
+
+ <_>
+ 0 5 10 6 -1.
+ <_>
+ 0 7 10 2 3.
+ <_>
+
+ <_>
+ 5 5 18 3 -1.
+ <_>
+ 5 6 18 1 3.
+ <_>
+
+ <_>
+ 2 6 9 6 -1.
+ <_>
+ 2 9 9 3 2.
+ <_>
+
+ <_>
+ 14 2 10 9 -1.
+ <_>
+ 14 5 10 3 3.
+ <_>
+
+ <_>
+ 3 6 18 3 -1.
+ <_>
+ 3 7 18 1 3.
+ <_>
+
+ <_>
+ 9 2 15 6 -1.
+ <_>
+ 9 4 15 2 3.
+ <_>
+
+ <_>
+ 4 8 15 6 -1.
+ <_>
+ 4 10 15 2 3.
+ <_>
+
+ <_>
+ 0 5 24 4 -1.
+ <_>
+ 12 5 12 2 2.
+ <_>
+ 0 7 12 2 2.
+ <_>
+
+ <_>
+ 7 8 6 12 -1.
+ <_>
+ 9 8 2 12 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 0 12 6 12 -1.
+ <_>
+ 0 12 3 6 2.
+ <_>
+ 3 18 3 6 2.
+ <_>
+
+ <_>
+ 14 12 10 6 -1.
+ <_>
+ 14 14 10 2 3.
+ <_>
+
+ <_>
+ 2 7 18 9 -1.
+ <_>
+ 2 10 18 3 3.
+ <_>
+
+ <_>
+ 11 14 10 9 -1.
+ <_>
+ 11 17 10 3 3.
+ <_>
+
+ <_>
+ 7 6 10 8 -1.
+ <_>
+ 7 6 5 4 2.
+ <_>
+ 12 10 5 4 2.
+ <_>
+
+ <_>
+ 6 6 14 6 -1.
+ <_>
+ 13 6 7 3 2.
+ <_>
+ 6 9 7 3 2.
+ <_>
+
+ <_>
+ 4 13 9 7 -1.
+ <_>
+ 7 13 3 7 3.
+ <_>
+
+ <_>
+ 14 10 6 12 -1.
+ <_>
+ 17 10 3 6 2.
+ <_>
+ 14 16 3 6 2.
+ <_>
+
+ <_>
+ 4 10 6 12 -1.
+ <_>
+ 4 10 3 6 2.
+ <_>
+ 7 16 3 6 2.
+ <_>
+
+ <_>
+ 13 9 8 6 -1.
+ <_>
+ 13 9 4 6 2.
+ <_>
+
+ <_>
+ 8 3 4 14 -1.
+ <_>
+ 10 3 2 14 2.
+ <_>
+
+ <_>
+ 17 0 3 18 -1.
+ <_>
+ 18 0 1 18 3.
+ <_>
+
+ <_>
+ 4 12 16 12 -1.
+ <_>
+ 12 12 8 12 2.
+ <_>
+
+ <_>
+ 15 0 6 14 -1.
+ <_>
+ 17 0 2 14 3.
+ <_>
+
+ <_>
+ 3 0 6 14 -1.
+ <_>
+ 5 0 2 14 3.
+ <_>
+
+ <_>
+ 12 2 12 20 -1.
+ <_>
+ 16 2 4 20 3.
+ <_>
+
+ <_>
+ 0 2 12 20 -1.
+ <_>
+ 4 2 4 20 3.
+ <_>
+
+ <_>
+ 16 0 6 17 -1.
+ <_>
+ 18 0 2 17 3.
+ <_>
+
+ <_>
+ 2 0 6 17 -1.
+ <_>
+ 4 0 2 17 3.
+ <_>
+
+ <_>
+ 15 6 9 6 -1.
+ <_>
+ 15 8 9 2 3.
+ <_>
+
+ <_>
+ 0 6 9 6 -1.
+ <_>
+ 0 8 9 2 3.
+ <_>
+
+ <_>
+ 18 1 6 13 -1.
+ <_>
+ 20 1 2 13 3.
+ <_>
+
+ <_>
+ 0 1 6 13 -1.
+ <_>
+ 2 1 2 13 3.
+ <_>
+
+ <_>
+ 16 0 4 9 -1.
+ <_>
+ 16 0 2 9 2.
+ <_>
+
+ <_>
+ 5 10 12 7 -1.
+ <_>
+ 9 10 4 7 3.
+ <_>
+
+ <_>
+ 12 9 12 6 -1.
+ <_>
+ 12 11 12 2 3.
+ <_>
+
+ <_>
+ 0 9 12 6 -1.
+ <_>
+ 0 11 12 2 3.
+ <_>
+
+ <_>
+ 5 7 14 9 -1.
+ <_>
+ 5 10 14 3 3.
+ <_>
+
+ <_>
+ 0 15 20 3 -1.
+ <_>
+ 0 16 20 1 3.
+ <_>
+
+ <_>
+ 8 10 8 10 -1.
+ <_>
+ 12 10 4 5 2.
+ <_>
+ 8 15 4 5 2.
+ <_>
+
+ <_>
+ 5 4 13 9 -1.
+ <_>
+ 5 7 13 3 3.
+ <_>
+
+ <_>
+ 10 2 6 18 -1.
+ <_>
+ 10 8 6 6 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 6 9 12 4 -1.
+ <_>
+ 6 11 12 2 2.
+ <_>
+
+ <_>
+ 3 2 15 12 -1.
+ <_>
+ 3 6 15 4 3.
+ <_>
+
+ <_>
+ 12 0 12 5 -1.
+ <_>
+ 16 0 4 5 3.
+ <_>
+
+ <_>
+ 0 15 18 3 -1.
+ <_>
+ 6 15 6 3 3.
+ <_>
+
+ <_>
+ 0 14 24 5 -1.
+ <_>
+ 8 14 8 5 3.
+ <_>
+
+ <_>
+ 5 1 3 18 -1.
+ <_>
+ 6 1 1 18 3.
+ <_>
+
+ <_>
+ 10 0 4 14 -1.
+ <_>
+ 10 0 2 14 2.
+ <_>
+
+ <_>
+ 9 3 4 9 -1.
+ <_>
+ 11 3 2 9 2.
+ <_>
+
+ <_>
+ 8 2 12 6 -1.
+ <_>
+ 14 2 6 3 2.
+ <_>
+ 8 5 6 3 2.
+ <_>
+
+ <_>
+ 0 4 17 4 -1.
+ <_>
+ 0 6 17 2 2.
+ <_>
+
+ <_>
+ 16 16 5 8 -1.
+ <_>
+ 16 20 5 4 2.
+ <_>
+
+ <_>
+ 3 16 5 8 -1.
+ <_>
+ 3 20 5 4 2.
+ <_>
+
+ <_>
+ 6 18 18 2 -1.
+ <_>
+ 6 19 18 1 2.
+ <_>
+
+ <_>
+ 0 0 12 5 -1.
+ <_>
+ 4 0 4 5 3.
+ <_>
+
+ <_>
+ 14 3 6 12 -1.
+ <_>
+ 17 3 3 6 2.
+ <_>
+ 14 9 3 6 2.
+ <_>
+
+ <_>
+ 0 12 6 12 -1.
+ <_>
+ 2 12 2 12 3.
+ <_>
+
+ <_>
+ 2 3 21 3 -1.
+ <_>
+ 2 4 21 1 3.
+ <_>
+
+ <_>
+ 4 3 6 12 -1.
+ <_>
+ 4 3 3 6 2.
+ <_>
+ 7 9 3 6 2.
+ <_>
+
+ <_>
+ 12 8 12 6 -1.
+ <_>
+ 18 8 6 3 2.
+ <_>
+ 12 11 6 3 2.
+ <_>
+
+ <_>
+ 0 15 16 9 -1.
+ <_>
+ 8 15 8 9 2.
+ <_>
+
+ <_>
+ 6 13 18 5 -1.
+ <_>
+ 6 13 9 5 2.
+ <_>
+
+ <_>
+ 1 6 15 6 -1.
+ <_>
+ 6 6 5 6 3.
+ <_>
+
+ <_>
+ 11 9 9 6 -1.
+ <_>
+ 14 9 3 6 3.
+ <_>
+
+ <_>
+ 3 0 15 11 -1.
+ <_>
+ 8 0 5 11 3.
+ <_>
+
+ <_>
+ 15 3 3 18 -1.
+ <_>
+ 15 9 3 6 3.
+ <_>
+
+ <_>
+ 6 3 3 18 -1.
+ <_>
+ 6 9 3 6 3.
+ <_>
+
+ <_>
+ 9 5 10 8 -1.
+ <_>
+ 14 5 5 4 2.
+ <_>
+ 9 9 5 4 2.
+ <_>
+
+ <_>
+ 4 4 16 8 -1.
+ <_>
+ 4 4 8 4 2.
+ <_>
+ 12 8 8 4 2.
+ <_>
+
+ <_>
+ 7 7 12 3 -1.
+ <_>
+ 7 7 6 3 2.
+ <_>
+
+ <_>
+ 5 0 9 13 -1.
+ <_>
+ 8 0 3 13 3.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 8 1 10 9 -1.
+ <_>
+ 8 4 10 3 3.
+ <_>
+
+ <_>
+ 0 2 18 2 -1.
+ <_>
+ 0 3 18 1 2.
+ <_>
+
+ <_>
+ 10 13 14 6 -1.
+ <_>
+ 17 13 7 3 2.
+ <_>
+ 10 16 7 3 2.
+ <_>
+
+ <_>
+ 0 13 14 6 -1.
+ <_>
+ 0 13 7 3 2.
+ <_>
+ 7 16 7 3 2.
+ <_>
+
+ <_>
+ 20 2 3 21 -1.
+ <_>
+ 21 2 1 21 3.
+ <_>
+
+ <_>
+ 0 9 5 12 -1.
+ <_>
+ 0 13 5 4 3.
+ <_>
+
+ <_>
+ 12 6 12 6 -1.
+ <_>
+ 12 8 12 2 3.
+ <_>
+
+ <_>
+ 1 8 20 3 -1.
+ <_>
+ 1 9 20 1 3.
+ <_>
+
+ <_>
+ 5 7 19 3 -1.
+ <_>
+ 5 8 19 1 3.
+ <_>
+
+ <_>
+ 1 12 9 6 -1.
+ <_>
+ 1 14 9 2 3.
+ <_>
+
+ <_>
+ 6 10 14 12 -1.
+ <_>
+ 6 14 14 4 3.
+ <_>
+
+ <_>
+ 5 6 14 18 -1.
+ <_>
+ 5 12 14 6 3.
+ <_>
+
+ <_>
+ 11 12 9 7 -1.
+ <_>
+ 14 12 3 7 3.
+ <_>
+
+ <_>
+ 1 15 18 4 -1.
+ <_>
+ 1 17 18 2 2.
+ <_>
+
+ <_>
+ 11 14 6 9 -1.
+ <_>
+ 11 17 6 3 3.
+ <_>
+
+ <_>
+ 0 8 18 4 -1.
+ <_>
+ 0 8 9 2 2.
+ <_>
+ 9 10 9 2 2.
+ <_>
+
+ <_>
+ 3 10 20 6 -1.
+ <_>
+ 13 10 10 3 2.
+ <_>
+ 3 13 10 3 2.
+ <_>
+
+ <_>
+ 1 10 20 6 -1.
+ <_>
+ 1 10 10 3 2.
+ <_>
+ 11 13 10 3 2.
+ <_>
+
+ <_>
+ 0 9 24 2 -1.
+ <_>
+ 0 9 12 2 2.
+ <_>
+
+ <_>
+ 1 12 20 8 -1.
+ <_>
+ 1 12 10 4 2.
+ <_>
+ 11 16 10 4 2.
+ <_>
+
+ <_>
+ 11 12 9 7 -1.
+ <_>
+ 14 12 3 7 3.
+ <_>
+
+ <_>
+ 4 12 9 7 -1.
+ <_>
+ 7 12 3 7 3.
+ <_>
+
+ <_>
+ 12 12 8 5 -1.
+ <_>
+ 12 12 4 5 2.
+ <_>
+
+ <_>
+ 4 12 8 5 -1.
+ <_>
+ 8 12 4 5 2.
+ <_>
+
+ <_>
+ 13 10 4 10 -1.
+ <_>
+ 13 10 2 10 2.
+ <_>
+
+ <_>
+ 1 15 20 2 -1.
+ <_>
+ 11 15 10 2 2.
+ <_>
+
+ <_>
+ 9 10 6 6 -1.
+ <_>
+ 9 10 3 6 2.
+ <_>
+
+ <_>
+ 0 1 21 3 -1.
+ <_>
+ 7 1 7 3 3.
+ <_>
+
+ <_>
+ 6 4 13 9 -1.
+ <_>
+ 6 7 13 3 3.
+ <_>
+
+ <_>
+ 6 5 12 5 -1.
+ <_>
+ 10 5 4 5 3.
+ <_>
+
+ <_>
+ 10 10 10 6 -1.
+ <_>
+ 10 12 10 2 3.
+ <_>
+
+ <_>
+ 6 12 5 8 -1.
+ <_>
+ 6 16 5 4 2.
+ <_>
+
+ <_>
+ 13 0 6 9 -1.
+ <_>
+ 15 0 2 9 3.
+ <_>
+
+ <_>
+ 2 10 18 6 -1.
+ <_>
+ 8 10 6 6 3.
+ <_>
+
+ <_>
+ 11 2 9 4 -1.
+ <_>
+ 11 4 9 2 2.
+ <_>
+
+ <_>
+ 1 20 21 3 -1.
+ <_>
+ 8 20 7 3 3.
+ <_>
+
+ <_>
+ 1 10 22 2 -1.
+ <_>
+ 1 11 22 1 2.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 13 0 6 9 -1.
+ <_>
+ 15 0 2 9 3.
+ <_>
+
+ <_>
+ 5 0 6 9 -1.
+ <_>
+ 7 0 2 9 3.
+ <_>
+
+ <_>
+ 18 2 6 20 -1.
+ <_>
+ 20 2 2 20 3.
+ <_>
+
+ <_>
+ 0 2 6 20 -1.
+ <_>
+ 2 2 2 20 3.
+ <_>
+
+ <_>
+ 11 7 6 14 -1.
+ <_>
+ 14 7 3 7 2.
+ <_>
+ 11 14 3 7 2.
+ <_>
+
+ <_>
+ 0 1 4 9 -1.
+ <_>
+ 2 1 2 9 2.
+ <_>
+
+ <_>
+ 12 14 9 4 -1.
+ <_>
+ 12 16 9 2 2.
+ <_>
+
+ <_>
+ 1 13 9 4 -1.
+ <_>
+ 1 15 9 2 2.
+ <_>
+
+ <_>
+ 7 6 15 6 -1.
+ <_>
+ 7 8 15 2 3.
+ <_>
+
+ <_>
+ 8 2 3 18 -1.
+ <_>
+ 8 8 3 6 3.
+ <_>
+
+ <_>
+ 6 6 12 6 -1.
+ <_>
+ 12 6 6 3 2.
+ <_>
+ 6 9 6 3 2.
+ <_>
+
+ <_>
+ 2 19 20 4 -1.
+ <_>
+ 2 19 10 2 2.
+ <_>
+ 12 21 10 2 2.
+ <_>
+
+ <_>
+ 14 15 6 9 -1.
+ <_>
+ 14 18 6 3 3.
+ <_>
+
+ <_>
+ 3 5 18 14 -1.
+ <_>
+ 3 5 9 7 2.
+ <_>
+ 12 12 9 7 2.
+ <_>
+
+ <_>
+ 15 6 4 18 -1.
+ <_>
+ 17 6 2 9 2.
+ <_>
+ 15 15 2 9 2.
+ <_>
+
+ <_>
+ 5 6 4 18 -1.
+ <_>
+ 5 6 2 9 2.
+ <_>
+ 7 15 2 9 2.
+ <_>
+
+ <_>
+ 11 0 6 9 -1.
+ <_>
+ 13 0 2 9 3.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 11 5 6 9 -1.
+ <_>
+ 13 5 2 9 3.
+ <_>
+
+ <_>
+ 9 5 6 6 -1.
+ <_>
+ 12 5 3 6 2.
+ <_>
+
+ <_>
+ 4 1 16 6 -1.
+ <_>
+ 12 1 8 3 2.
+ <_>
+ 4 4 8 3 2.
+ <_>
+
+ <_>
+ 9 13 6 11 -1.
+ <_>
+ 11 13 2 11 3.
+ <_>
+
+ <_>
+ 17 1 6 12 -1.
+ <_>
+ 20 1 3 6 2.
+ <_>
+ 17 7 3 6 2.
+ <_>
+
+ <_>
+ 1 17 18 3 -1.
+ <_>
+ 1 18 18 1 3.
+ <_>
+
+ <_>
+ 7 13 10 8 -1.
+ <_>
+ 7 17 10 4 2.
+ <_>
+
+ <_>
+ 6 18 10 6 -1.
+ <_>
+ 6 20 10 2 3.
+ <_>
+
+ <_>
+ 9 14 9 4 -1.
+ <_>
+ 9 16 9 2 2.
+ <_>
+
+ <_>
+ 1 1 6 12 -1.
+ <_>
+ 1 1 3 6 2.
+ <_>
+ 4 7 3 6 2.
+ <_>
+
+ <_>
+ 19 4 5 12 -1.
+ <_>
+ 19 8 5 4 3.
+ <_>
+
+ <_>
+ 0 0 8 8 -1.
+ <_>
+ 4 0 4 8 2.
+ <_>
+
+ <_>
+ 3 5 19 3 -1.
+ <_>
+ 3 6 19 1 3.
+ <_>
+
+ <_>
+ 1 5 12 6 -1.
+ <_>
+ 1 5 6 3 2.
+ <_>
+ 7 8 6 3 2.
+ <_>
+
+ <_>
+ 2 1 21 8 -1.
+ <_>
+ 9 1 7 8 3.
+ <_>
+
+ <_>
+ 4 1 16 8 -1.
+ <_>
+ 4 5 16 4 2.
+ <_>
+
+ <_>
+ 6 0 18 3 -1.
+ <_>
+ 6 1 18 1 3.
+ <_>
+
+ <_>
+ 4 4 10 14 -1.
+ <_>
+ 4 11 10 7 2.
+ <_>
+
+ <_>
+ 15 6 4 10 -1.
+ <_>
+ 15 11 4 5 2.
+ <_>
+
+ <_>
+ 3 18 18 3 -1.
+ <_>
+ 9 18 6 3 3.
+ <_>
+
+ <_>
+ 8 18 12 6 -1.
+ <_>
+ 12 18 4 6 3.
+ <_>
+
+ <_>
+ 3 15 6 9 -1.
+ <_>
+ 6 15 3 9 2.
+ <_>
+
+ <_>
+ 15 7 6 8 -1.
+ <_>
+ 15 11 6 4 2.
+ <_>
+
+ <_>
+ 3 7 6 8 -1.
+ <_>
+ 3 11 6 4 2.
+ <_>
+
+ <_>
+ 5 9 18 6 -1.
+ <_>
+ 14 9 9 3 2.
+ <_>
+ 5 12 9 3 2.
+ <_>
+
+ <_>
+ 1 13 12 6 -1.
+ <_>
+ 1 15 12 2 3.
+ <_>
+
+ <_>
+ 14 15 10 6 -1.
+ <_>
+ 14 17 10 2 3.
+ <_>
+
+ <_>
+ 0 15 10 6 -1.
+ <_>
+ 0 17 10 2 3.
+ <_>
+
+ <_>
+ 15 13 6 9 -1.
+ <_>
+ 15 16 6 3 3.
+ <_>
+
+ <_>
+ 3 13 6 9 -1.
+ <_>
+ 3 16 6 3 3.
+ <_>
+
+ <_>
+ 9 5 8 8 -1.
+ <_>
+ 9 5 4 8 2.
+ <_>
+
+ <_>
+ 1 18 12 6 -1.
+ <_>
+ 1 18 6 3 2.
+ <_>
+ 7 21 6 3 2.
+ <_>
+
+ <_>
+ 13 19 10 4 -1.
+ <_>
+ 13 21 10 2 2.
+ <_>
+
+ <_>
+ 1 19 10 4 -1.
+ <_>
+ 1 21 10 2 2.
+ <_>
+
+ <_>
+ 6 19 18 3 -1.
+ <_>
+ 6 20 18 1 3.
+ <_>
+
+ <_>
+ 8 14 4 10 -1.
+ <_>
+ 8 19 4 5 2.
+ <_>
+
+ <_>
+ 0 0 24 6 -1.
+ <_>
+ 0 2 24 2 3.
+ <_>
+
+ <_>
+ 0 1 6 9 -1.
+ <_>
+ 0 4 6 3 3.
+ <_>
+
+ <_>
+ 4 9 20 6 -1.
+ <_>
+ 14 9 10 3 2.
+ <_>
+ 4 12 10 3 2.
+ <_>
+
+ <_>
+ 1 15 19 8 -1.
+ <_>
+ 1 19 19 4 2.
+ <_>
+
+ <_>
+ 14 0 10 6 -1.
+ <_>
+ 14 2 10 2 3.
+ <_>
+
+ <_>
+ 1 10 21 14 -1.
+ <_>
+ 8 10 7 14 3.
+ <_>
+
+ <_>
+ 10 10 8 8 -1.
+ <_>
+ 10 10 4 8 2.
+ <_>
+
+ <_>
+ 6 8 10 4 -1.
+ <_>
+ 11 8 5 4 2.
+ <_>
+
+ <_>
+ 10 5 4 9 -1.
+ <_>
+ 10 5 2 9 2.
+ <_>
+
+ <_>
+ 7 5 6 10 -1.
+ <_>
+ 9 5 2 10 3.
+ <_>
+
+ <_>
+ 14 4 4 13 -1.
+ <_>
+ 14 4 2 13 2.
+ <_>
+
+ <_>
+ 6 4 4 13 -1.
+ <_>
+ 8 4 2 13 2.
+ <_>
+
+ <_>
+ 8 7 9 6 -1.
+ <_>
+ 11 7 3 6 3.
+ <_>
+
+ <_>
+ 3 6 16 6 -1.
+ <_>
+ 3 6 8 3 2.
+ <_>
+ 11 9 8 3 2.
+ <_>
+
+ <_>
+ 5 4 16 14 -1.
+ <_>
+ 13 4 8 7 2.
+ <_>
+ 5 11 8 7 2.
+ <_>
+
+ <_>
+ 0 0 24 4 -1.
+ <_>
+ 0 0 12 2 2.
+ <_>
+ 12 2 12 2 2.
+ <_>
+
+ <_>
+ 9 1 9 6 -1.
+ <_>
+ 12 1 3 6 3.
+ <_>
+
+ <_>
+ 4 1 14 4 -1.
+ <_>
+ 11 1 7 4 2.
+ <_>
+
+ <_>
+ 10 14 7 9 -1.
+ <_>
+ 10 17 7 3 3.
+ <_>
+
+ <_>
+ 8 3 8 10 -1.
+ <_>
+ 8 3 4 5 2.
+ <_>
+ 12 8 4 5 2.
+ <_>
+
+ <_>
+ 7 3 12 5 -1.
+ <_>
+ 11 3 4 5 3.
+ <_>
+
+ <_>
+ 8 2 4 13 -1.
+ <_>
+ 10 2 2 13 2.
+ <_>
+
+ <_>
+ 11 2 3 19 -1.
+ <_>
+ 12 2 1 19 3.
+ <_>
+
+ <_>
+ 7 7 9 6 -1.
+ <_>
+ 10 7 3 6 3.
+ <_>
+
+ <_>
+ 4 22 20 2 -1.
+ <_>
+ 4 22 10 2 2.
+ <_>
+
+ <_>
+ 0 16 24 4 -1.
+ <_>
+ 0 16 12 2 2.
+ <_>
+ 12 18 12 2 2.
+ <_>
+
+ <_>
+ 7 3 12 5 -1.
+ <_>
+ 11 3 4 5 3.
+ <_>
+
+ <_>
+ 1 10 8 14 -1.
+ <_>
+ 1 10 4 7 2.
+ <_>
+ 5 17 4 7 2.
+ <_>
+
+ <_>
+ 11 16 6 6 -1.
+ <_>
+ 11 19 6 3 2.
+ <_>
+
+ <_>
+ 6 0 10 24 -1.
+ <_>
+ 6 0 5 12 2.
+ <_>
+ 11 12 5 12 2.
+ <_>
+
+ <_>
+ 7 5 14 14 -1.
+ <_>
+ 14 5 7 7 2.
+ <_>
+ 7 12 7 7 2.
+ <_>
+
+ <_>
+ 7 8 10 8 -1.
+ <_>
+ 7 8 5 4 2.
+ <_>
+ 12 12 5 4 2.
+ <_>
+
+ <_>
+ 9 1 9 6 -1.
+ <_>
+ 12 1 3 6 3.
+ <_>
+
+ <_>
+ 0 6 24 3 -1.
+ <_>
+ 12 6 12 3 2.
+ <_>
+
+ <_>
+ 7 3 12 5 -1.
+ <_>
+ 11 3 4 5 3.
+ <_>
+
+ <_>
+ 1 13 22 4 -1.
+ <_>
+ 1 13 11 2 2.
+ <_>
+ 12 15 11 2 2.
+ <_>
+
+ <_>
+ 9 12 12 6 -1.
+ <_>
+ 9 14 12 2 3.
+ <_>
+
+ <_>
+ 0 5 9 6 -1.
+ <_>
+ 0 7 9 2 3.
+ <_>
+
+ <_>
+ 1 5 23 6 -1.
+ <_>
+ 1 7 23 2 3.
+ <_>
+
+ <_>
+ 1 6 19 12 -1.
+ <_>
+ 1 10 19 4 3.
+ <_>
+
+ <_>
+ 9 1 6 21 -1.
+ <_>
+ 9 8 6 7 3.
+ <_>
+
+ <_>
+ 3 19 18 3 -1.
+ <_>
+ 9 19 6 3 3.
+ <_>
+
+ <_>
+ 9 14 6 9 -1.
+ <_>
+ 11 14 2 9 3.
+ <_>
+
+ <_>
+ 9 6 4 12 -1.
+ <_>
+ 11 6 2 12 2.
+ <_>
+
+ <_>
+ 16 0 6 9 -1.
+ <_>
+ 18 0 2 9 3.
+ <_>
+
+ <_>
+ 2 0 6 9 -1.
+ <_>
+ 4 0 2 9 3.
+ <_>
+
+ <_>
+ 13 1 4 22 -1.
+ <_>
+ 15 1 2 11 2.
+ <_>
+ 13 12 2 11 2.
+ <_>
+
+ <_>
+ 1 8 8 12 -1.
+ <_>
+ 1 14 8 6 2.
+ <_>
+
+ <_>
+ 14 7 7 9 -1.
+ <_>
+ 14 10 7 3 3.
+ <_>
+
+ <_>
+ 3 12 18 4 -1.
+ <_>
+ 3 12 9 2 2.
+ <_>
+ 12 14 9 2 2.
+ <_>
+
+ <_>
+ 13 1 4 22 -1.
+ <_>
+ 15 1 2 11 2.
+ <_>
+ 13 12 2 11 2.
+ <_>
+
+ <_>
+ 7 1 4 22 -1.
+ <_>
+ 7 1 2 11 2.
+ <_>
+ 9 12 2 11 2.
+ <_>
+
+ <_>
+ 4 7 20 4 -1.
+ <_>
+ 14 7 10 2 2.
+ <_>
+ 4 9 10 2 2.
+ <_>
+
+ <_>
+ 9 10 6 7 -1.
+ <_>
+ 12 10 3 7 2.
+ <_>
+
+ <_>
+ 7 7 10 4 -1.
+ <_>
+ 7 7 5 4 2.
+ <_>
+
+ <_>
+ 0 3 4 15 -1.
+ <_>
+ 0 8 4 5 3.
+ <_>
+
+ <_>
+ 15 0 8 12 -1.
+ <_>
+ 19 0 4 6 2.
+ <_>
+ 15 6 4 6 2.
+ <_>
+
+ <_>
+ 1 0 8 12 -1.
+ <_>
+ 1 0 4 6 2.
+ <_>
+ 5 6 4 6 2.
+ <_>
+
+ <_>
+ 14 5 6 16 -1.
+ <_>
+ 16 5 2 16 3.
+ <_>
+
+ <_>
+ 4 5 6 16 -1.
+ <_>
+ 6 5 2 16 3.
+ <_>
+
+ <_>
+ 15 0 6 16 -1.
+ <_>
+ 17 0 2 16 3.
+ <_>
+
+ <_>
+ 3 0 6 16 -1.
+ <_>
+ 5 0 2 16 3.
+ <_>
+
+ <_>
+ 0 2 24 3 -1.
+ <_>
+ 0 3 24 1 3.
+ <_>
+
+ <_>
+ 7 1 10 4 -1.
+ <_>
+ 7 3 10 2 2.
+ <_>
+
+ <_>
+ 1 0 23 8 -1.
+ <_>
+ 1 4 23 4 2.
+ <_>
+
+ <_>
+ 1 17 19 3 -1.
+ <_>
+ 1 18 19 1 3.
+ <_>
+
+ <_>
+ 6 18 18 2 -1.
+ <_>
+ 6 19 18 1 2.
+ <_>
+
+ <_>
+ 1 17 9 6 -1.
+ <_>
+ 1 19 9 2 3.
+ <_>
+
+ <_>
+ 15 15 6 9 -1.
+ <_>
+ 15 18 6 3 3.
+ <_>
+
+ <_>
+ 3 15 6 9 -1.
+ <_>
+ 3 18 6 3 3.
+ <_>
+
+ <_>
+ 4 14 20 6 -1.
+ <_>
+ 4 17 20 3 2.
+ <_>
+
+ <_>
+ 0 10 6 14 -1.
+ <_>
+ 0 10 3 7 2.
+ <_>
+ 3 17 3 7 2.
+ <_>
+
+ <_>
+ 6 18 18 3 -1.
+ <_>
+ 6 19 18 1 3.
+ <_>
+
+ <_>
+ 4 12 9 7 -1.
+ <_>
+ 7 12 3 7 3.
+ <_>
+
+ <_>
+ 6 10 18 5 -1.
+ <_>
+ 12 10 6 5 3.
+ <_>
+
+ <_>
+ 0 10 18 5 -1.
+ <_>
+ 6 10 6 5 3.
+ <_>
+
+ <_>
+ 3 2 18 9 -1.
+ <_>
+ 9 2 6 9 3.
+ <_>
+
+ <_>
+ 4 6 10 10 -1.
+ <_>
+ 4 6 5 5 2.
+ <_>
+ 9 11 5 5 2.
+ <_>
+
+ <_>
+ 20 14 4 9 -1.
+ <_>
+ 20 14 2 9 2.
+ <_>
+
+ <_>
+ 0 14 4 9 -1.
+ <_>
+ 2 14 2 9 2.
+ <_>
+
+ <_>
+ 11 1 4 20 -1.
+ <_>
+ 13 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 6 21 12 3 -1.
+ <_>
+ 12 21 6 3 2.
+ <_>
+
+ <_>
+ 11 1 4 20 -1.
+ <_>
+ 13 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 1 16 10 8 -1.
+ <_>
+ 1 16 5 4 2.
+ <_>
+ 6 20 5 4 2.
+ <_>
+
+ <_>
+ 11 1 4 20 -1.
+ <_>
+ 13 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 1 0 3 19 -1.
+ <_>
+ 2 0 1 19 3.
+ <_>
+
+ <_>
+ 11 1 4 20 -1.
+ <_>
+ 13 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 0 1 6 9 -1.
+ <_>
+ 2 1 2 9 3.
+ <_>
+
+ <_>
+ 3 7 19 4 -1.
+ <_>
+ 3 9 19 2 2.
+ <_>
+
+ <_>
+ 7 14 9 6 -1.
+ <_>
+ 7 16 9 2 3.
+ <_>
+
+ <_>
+ 17 1 7 6 -1.
+ <_>
+ 17 4 7 3 2.
+ <_>
+
+ <_>
+ 5 0 14 8 -1.
+ <_>
+ 5 4 14 4 2.
+ <_>
+
+ <_>
+ 16 1 8 6 -1.
+ <_>
+ 16 4 8 3 2.
+ <_>
+
+ <_>
+ 0 1 8 6 -1.
+ <_>
+ 0 4 8 3 2.
+ <_>
+
+ <_>
+ 6 0 18 4 -1.
+ <_>
+ 15 0 9 2 2.
+ <_>
+ 6 2 9 2 2.
+ <_>
+
+ <_>
+ 0 14 9 6 -1.
+ <_>
+ 0 16 9 2 3.
+ <_>
+
+ <_>
+ 3 7 18 8 -1.
+ <_>
+ 9 7 6 8 3.
+ <_>
+
+ <_>
+ 2 11 6 9 -1.
+ <_>
+ 4 11 2 9 3.
+ <_>
+
+ <_>
+ 10 5 6 9 -1.
+ <_>
+ 12 5 2 9 3.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 10 6 2 9 2.
+ <_>
+ 12 15 2 9 2.
+ <_>
+
+ <_>
+ 11 1 4 20 -1.
+ <_>
+ 13 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 9 1 4 20 -1.
+ <_>
+ 9 1 2 10 2.
+ <_>
+ 11 11 2 10 2.
+ <_>
+
+ <_>
+ 5 9 18 6 -1.
+ <_>
+ 14 9 9 3 2.
+ <_>
+ 5 12 9 3 2.
+ <_>
+
+ <_>
+ 6 4 6 9 -1.
+ <_>
+ 8 4 2 9 3.
+ <_>
+
+ <_>
+ 10 16 8 6 -1.
+ <_>
+ 10 16 4 6 2.
+ <_>
+
+ <_>
+ 0 0 18 8 -1.
+ <_>
+ 0 0 9 4 2.
+ <_>
+ 9 4 9 4 2.
+ <_>
+
+ <_>
+ 6 5 14 12 -1.
+ <_>
+ 13 5 7 6 2.
+ <_>
+ 6 11 7 6 2.
+ <_>
+
+ <_>
+ 4 3 15 7 -1.
+ <_>
+ 9 3 5 7 3.
+ <_>
+
+ <_>
+ 14 12 10 6 -1.
+ <_>
+ 14 14 10 2 3.
+ <_>
+
+ <_>
+ 0 11 4 10 -1.
+ <_>
+ 0 16 4 5 2.
+ <_>
+
+ <_>
+ 1 10 22 3 -1.
+ <_>
+ 1 11 22 1 3.
+ <_>
+
+ <_>
+ 8 9 6 10 -1.
+ <_>
+ 10 9 2 10 3.
+ <_>
+
+ <_>
+ 13 2 6 12 -1.
+ <_>
+ 16 2 3 6 2.
+ <_>
+ 13 8 3 6 2.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 10 6 2 9 2.
+ <_>
+ 12 15 2 9 2.
+ <_>
+
+ <_>
+ 7 8 10 16 -1.
+ <_>
+ 12 8 5 8 2.
+ <_>
+ 7 16 5 8 2.
+ <_>
+
+ <_>
+ 8 1 8 12 -1.
+ <_>
+ 8 1 4 6 2.
+ <_>
+ 12 7 4 6 2.
+ <_>
+
+ <_>
+ 7 1 12 14 -1.
+ <_>
+ 13 1 6 7 2.
+ <_>
+ 7 8 6 7 2.
+ <_>
+
+ <_>
+ 2 14 12 6 -1.
+ <_>
+ 2 16 12 2 3.
+ <_>
+
+ <_>
+ 11 16 6 6 -1.
+ <_>
+ 11 19 6 3 2.
+ <_>
+
+ <_>
+ 7 16 6 6 -1.
+ <_>
+ 7 19 6 3 2.
+ <_>
+
+ <_>
+ 13 4 4 10 -1.
+ <_>
+ 13 4 2 10 2.
+ <_>
+
+ <_>
+ 0 19 19 3 -1.
+ <_>
+ 0 20 19 1 3.
+ <_>
+
+ <_>
+ 12 8 6 8 -1.
+ <_>
+ 12 12 6 4 2.
+ <_>
+
+ <_>
+ 8 1 8 22 -1.
+ <_>
+ 8 12 8 11 2.
+ <_>
+
+ <_>
+ 12 8 6 8 -1.
+ <_>
+ 12 12 6 4 2.
+ <_>
+
+ <_>
+ 6 8 6 8 -1.
+ <_>
+ 6 12 6 4 2.
+ <_>
+
+ <_>
+ 14 5 6 9 -1.
+ <_>
+ 14 8 6 3 3.
+ <_>
+
+ <_>
+ 0 6 24 4 -1.
+ <_>
+ 0 8 24 2 2.
+ <_>
+
+ <_>
+ 14 12 10 6 -1.
+ <_>
+ 14 14 10 2 3.
+ <_>
+
+ <_>
+ 0 12 10 6 -1.
+ <_>
+ 0 14 10 2 3.
+ <_>
+
+ <_>
+ 4 6 19 3 -1.
+ <_>
+ 4 7 19 1 3.
+ <_>
+
+ <_>
+ 1 6 19 3 -1.
+ <_>
+ 1 7 19 1 3.
+ <_>
+
+ <_>
+ 4 0 16 9 -1.
+ <_>
+ 4 3 16 3 3.
+ <_>
+
+ <_>
+ 0 1 24 5 -1.
+ <_>
+ 8 1 8 5 3.
+ <_>
+
+ <_>
+ 3 6 6 15 -1.
+ <_>
+ 3 11 6 5 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 6 22 18 2 -1.
+ <_>
+ 6 23 18 1 2.
+ <_>
+
+ <_>
+ 2 12 6 9 -1.
+ <_>
+ 2 15 6 3 3.
+ <_>
+
+ <_>
+ 18 12 6 9 -1.
+ <_>
+ 18 15 6 3 3.
+ <_>
+
+ <_>
+ 0 12 6 9 -1.
+ <_>
+ 0 15 6 3 3.
+ <_>
+
+ <_>
+ 11 14 4 10 -1.
+ <_>
+ 11 19 4 5 2.
+ <_>
+
+ <_>
+ 9 6 6 16 -1.
+ <_>
+ 9 14 6 8 2.
+ <_>
+
+ <_>
+ 7 7 10 10 -1.
+ <_>
+ 7 12 10 5 2.
+ <_>
+
+ <_>
+ 1 3 6 13 -1.
+ <_>
+ 3 3 2 13 3.
+ <_>
+
+ <_>
+ 18 1 6 13 -1.
+ <_>
+ 18 1 3 13 2.
+ <_>
+
+ <_>
+ 5 1 6 9 -1.
+ <_>
+ 7 1 2 9 3.
+ <_>
+
+ <_>
+ 18 2 6 11 -1.
+ <_>
+ 18 2 3 11 2.
+ <_>
+
+ <_>
+ 0 2 6 11 -1.
+ <_>
+ 3 2 3 11 2.
+ <_>
+
+ <_>
+ 9 12 15 6 -1.
+ <_>
+ 9 14 15 2 3.
+ <_>
+
+ <_>
+ 2 2 20 3 -1.
+ <_>
+ 2 3 20 1 3.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 10 6 2 9 2.
+ <_>
+
+ <_>
+ 5 6 12 14 -1.
+ <_>
+ 5 6 6 7 2.
+ <_>
+ 11 13 6 7 2.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 7 0 9 6 -1.
+ <_>
+ 10 0 3 6 3.
+ <_>
+
+ <_>
+ 10 6 6 9 -1.
+ <_>
+ 12 6 2 9 3.
+ <_>
+
+ <_>
+ 4 1 12 20 -1.
+ <_>
+ 4 1 6 10 2.
+ <_>
+ 10 11 6 10 2.
+ <_>
+
+ <_>
+ 6 7 18 3 -1.
+ <_>
+ 6 7 9 3 2.
+ <_>
+
+ <_>
+ 0 7 18 3 -1.
+ <_>
+ 9 7 9 3 2.
+ <_>
+
+ <_>
+ 3 20 18 3 -1.
+ <_>
+ 9 20 6 3 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 6 2 12 15 -1.
+ <_>
+ 10 2 4 15 3.
+ <_>
+
+ <_>
+ 2 3 18 3 -1.
+ <_>
+ 2 4 18 1 3.
+ <_>
+
+ <_>
+ 19 4 4 18 -1.
+ <_>
+ 21 4 2 9 2.
+ <_>
+ 19 13 2 9 2.
+ <_>
+
+ <_>
+ 0 1 19 3 -1.
+ <_>
+ 0 2 19 1 3.
+ <_>
+
+ <_>
+ 5 0 15 4 -1.
+ <_>
+ 5 2 15 2 2.
+ <_>
+
+ <_>
+ 5 2 14 5 -1.
+ <_>
+ 12 2 7 5 2.
+ <_>
+
+ <_>
+ 1 2 22 14 -1.
+ <_>
+ 1 2 11 14 2.
+ <_>
+
+ <_>
+ 8 15 6 9 -1.
+ <_>
+ 10 15 2 9 3.
+ <_>
+
+ <_>
+ 6 17 18 3 -1.
+ <_>
+ 6 18 18 1 3.
+ <_>
+
+ <_>
+ 9 6 3 18 -1.
+ <_>
+ 9 12 3 6 3.
+ <_>
+
+ <_>
+ 2 0 20 3 -1.
+ <_>
+ 2 1 20 1 3.
+ <_>
+
+ <_>
+ 5 4 5 12 -1.
+ <_>
+ 5 8 5 4 3.
+ <_>
+
+ <_>
+ 8 6 12 5 -1.
+ <_>
+ 12 6 4 5 3.
+ <_>
+
+ <_>
+ 9 12 6 12 -1.
+ <_>
+ 9 12 3 6 2.
+ <_>
+ 12 18 3 6 2.
+ <_>
+
+ <_>
+ 14 14 8 10 -1.
+ <_>
+ 18 14 4 5 2.
+ <_>
+ 14 19 4 5 2.
+ <_>
+
+ <_>
+ 2 14 8 10 -1.
+ <_>
+ 2 14 4 5 2.
+ <_>
+ 6 19 4 5 2.
+ <_>
+
+ <_>
+ 10 18 12 6 -1.
+ <_>
+ 16 18 6 3 2.
+ <_>
+ 10 21 6 3 2.
+ <_>
+
+ <_>
+ 1 3 6 9 -1.
+ <_>
+ 1 6 6 3 3.
+ <_>
+
+ <_>
+ 11 3 3 20 -1.
+ <_>
+ 12 3 1 20 3.
+ <_>
+
+ <_>
+ 4 6 14 6 -1.
+ <_>
+ 4 6 7 3 2.
+ <_>
+ 11 9 7 3 2.
+ <_>
+
+ <_>
+ 6 5 12 13 -1.
+ <_>
+ 10 5 4 13 3.
+ <_>
+
+ <_>
+ 5 4 4 15 -1.
+ <_>
+ 5 9 4 5 3.
+ <_>
+
+ <_>
+ 9 16 15 4 -1.
+ <_>
+ 14 16 5 4 3.
+ <_>
+
+ <_>
+ 7 8 6 14 -1.
+ <_>
+ 7 8 3 7 2.
+ <_>
+ 10 15 3 7 2.
+ <_>
+
+ <_>
+ 7 6 10 6 -1.
+ <_>
+ 7 8 10 2 3.
+ <_>
+
+ <_>
+ 2 5 18 3 -1.
+ <_>
+ 2 6 18 1 3.
+ <_>
+
+ <_>
+ 5 1 15 8 -1.
+ <_>
+ 5 5 15 4 2.
+ <_>
+
+ <_>
+ 7 1 8 18 -1.
+ <_>
+ 7 10 8 9 2.
+ <_>
+
+ <_>
+ 0 10 24 3 -1.
+ <_>
+ 0 11 24 1 3.
+ <_>
+
+ <_>
+ 0 2 6 13 -1.
+ <_>
+ 2 2 2 13 3.
+ <_>
+
+ <_>
+ 16 0 8 10 -1.
+ <_>
+ 20 0 4 5 2.
+ <_>
+ 16 5 4 5 2.
+ <_>
+
+ <_>
+ 5 1 10 9 -1.
+ <_>
+ 5 4 10 3 3.
+ <_>
+
+ <_>
+ 5 6 18 3 -1.
+ <_>
+ 5 7 18 1 3.
+ <_>
+
+ <_>
+ 0 1 24 3 -1.
+ <_>
+ 0 2 24 1 3.
+ <_>
+
+ <_>
+ 11 4 6 11 -1.
+ <_>
+ 13 4 2 11 3.
+ <_>
+
+ <_>
+ 0 0 8 10 -1.
+ <_>
+ 0 0 4 5 2.
+ <_>
+ 4 5 4 5 2.
+ <_>
+
+ <_>
+ 4 16 18 3 -1.
+ <_>
+ 4 17 18 1 3.
+ <_>
+
+ <_>
+ 2 16 18 3 -1.
+ <_>
+ 2 17 18 1 3.
+ <_>
+
+ <_>
+ 3 0 18 10 -1.
+ <_>
+ 12 0 9 5 2.
+ <_>
+ 3 5 9 5 2.
+ <_>
+
+ <_>
+ 2 3 20 21 -1.
+ <_>
+ 12 3 10 21 2.
+ <_>
+
+ <_>
+ 6 7 14 3 -1.
+ <_>
+ 6 7 7 3 2.
+ <_>
+
+ <_>
+ 0 9 12 6 -1.
+ <_>
+ 0 9 6 3 2.
+ <_>
+ 6 12 6 3 2.
+ <_>
+
+ <_>
+ 3 14 21 4 -1.
+ <_>
+ 10 14 7 4 3.
+ <_>
+
+ <_>
+ 0 14 21 4 -1.
+ <_>
+ 7 14 7 4 3.
+ <_>
+
+ <_>
+ 5 21 18 3 -1.
+ <_>
+ 11 21 6 3 3.
+ <_>
+
+ <_>
+ 1 21 18 3 -1.
+ <_>
+ 7 21 6 3 3.
+ <_>
+
+ <_>
+ 19 4 4 18 -1.
+ <_>
+ 21 4 2 9 2.
+ <_>
+ 19 13 2 9 2.
+ <_>
+
+ <_>
+ 3 7 18 3 -1.
+ <_>
+ 3 8 18 1 3.
+ <_>
+
+ <_>
+ 19 4 4 18 -1.
+ <_>
+ 21 4 2 9 2.
+ <_>
+ 19 13 2 9 2.
+ <_>
+
+ <_>
+ 7 15 10 6 -1.
+ <_>
+ 7 17 10 2 3.
+ <_>
+
+ <_>
+ 9 13 11 9 -1.
+ <_>
+ 9 16 11 3 3.
+ <_>
+
+ <_>
+ 0 6 4 10 -1.
+ <_>
+ 0 11 4 5 2.
+ <_>
+
+ <_>
+ 15 16 9 6 -1.
+ <_>
+ 15 18 9 2 3.
+ <_>
+
+ <_>
+ 1 5 4 18 -1.
+ <_>
+ 1 5 2 9 2.
+ <_>
+ 3 14 2 9 2.
+ <_>
+
+ <_>
+ 9 8 8 10 -1.
+ <_>
+ 13 8 4 5 2.
+ <_>
+ 9 13 4 5 2.
+ <_>
+
+ <_>
+ 7 8 8 10 -1.
+ <_>
+ 7 8 4 5 2.
+ <_>
+ 11 13 4 5 2.
+ <_>
+
+ <_>
+ 9 8 12 5 -1.
+ <_>
+ 13 8 4 5 3.
+ <_>
+
+ <_>
+ 7 8 9 7 -1.
+ <_>
+ 10 8 3 7 3.
+ <_>
+
+ <_>
+ 9 8 12 5 -1.
+ <_>
+ 13 8 4 5 3.
+ <_>
+
+ <_>
+ 7 6 9 7 -1.
+ <_>
+ 10 6 3 7 3.
+ <_>
+
+ <_>
+ 9 8 12 5 -1.
+ <_>
+ 13 8 4 5 3.
+ <_>
+
+ <_>
+ 10 5 4 18 -1.
+ <_>
+ 10 11 4 6 3.
+ <_>
+
+ <_>
+ 5 5 14 12 -1.
+ <_>
+ 5 11 14 6 2.
+ <_>
+
+ <_>
+ 0 1 11 4 -1.
+ <_>
+ 0 3 11 2 2.
+ <_>
+
+ <_>
+ 9 10 6 10 -1.
+ <_>
+ 11 10 2 10 3.
+ <_>
+
+ <_>
+ 2 17 11 6 -1.
+ <_>
+ 2 19 11 2 3.
+ <_>
+
+ <_>
+ 15 16 9 6 -1.
+ <_>
+ 15 18 9 2 3.
+ <_>
+
+ <_>
+ 1 10 18 2 -1.
+ <_>
+ 1 11 18 1 2.
+ <_>
+
+ <_>
+ 6 4 12 13 -1.
+ <_>
+ 10 4 4 13 3.
+ <_>
+
+ <_>
+ 0 18 18 3 -1.
+ <_>
+ 0 19 18 1 3.
+ <_>
+
+ <_>
+ 6 18 18 3 -1.
+ <_>
+ 6 19 18 1 3.
+ <_>
+
+ <_>
+ 0 16 9 6 -1.
+ <_>
+ 0 18 9 2 3.
+ <_>
+
+ <_>
+ 13 15 9 6 -1.
+ <_>
+ 13 17 9 2 3.
+ <_>
+
+ <_>
+ 2 15 9 6 -1.
+ <_>
+ 2 17 9 2 3.
+ <_>
+
+ <_>
+ 13 1 6 16 -1.
+ <_>
+ 13 1 3 16 2.
+ <_>
+
+ <_>
+ 5 1 6 16 -1.
+ <_>
+ 8 1 3 16 2.
+ <_>
+
+ <_>
+ 11 5 6 10 -1.
+ <_>
+ 13 5 2 10 3.
+ <_>
+
+ <_>
+ 7 5 6 10 -1.
+ <_>
+ 9 5 2 10 3.
+ <_>
+
+ <_>
+ 10 0 6 24 -1.
+ <_>
+ 12 0 2 24 3.
+ <_>
+
+ <_>
+ 3 4 4 20 -1.
+ <_>
+ 3 4 2 10 2.
+ <_>
+ 5 14 2 10 2.
+ <_>
+
+ <_>
+ 14 0 6 9 -1.
+ <_>
+ 16 0 2 9 3.
+ <_>
+
+ <_>
+ 4 0 6 9 -1.
+ <_>
+ 6 0 2 9 3.
+ <_>
+
+ <_>
+ 4 5 18 5 -1.
+ <_>
+ 10 5 6 5 3.
+ <_>
+
+ <_>
+ 5 6 6 9 -1.
+ <_>
+ 7 6 2 9 3.
+ <_>
+
+ <_>
+ 7 2 15 8 -1.
+ <_>
+ 12 2 5 8 3.
+ <_>
+
+ <_>
+ 2 2 15 8 -1.
+ <_>
+ 7 2 5 8 3.
+ <_>
+
+ <_>
+ 10 0 4 9 -1.
+ <_>
+ 10 0 2 9 2.
+ <_>
+
+ <_>
+ 3 4 6 12 -1.
+ <_>
+ 3 4 3 6 2.
+ <_>
+ 6 10 3 6 2.
+ <_>
+
+ <_>
+ 16 0 8 18 -1.
+ <_>
+ 16 0 4 18 2.
+ <_>
+
+ <_>
+ 0 0 8 18 -1.
+ <_>
+ 4 0 4 18 2.
+ <_>
+
+ <_>
+ 0 7 24 6 -1.
+ <_>
+ 0 9 24 2 3.
+ <_>
+
+ <_>
+ 4 7 14 3 -1.
+ <_>
+ 11 7 7 3 2.
+ <_>
+
+ <_>
+ 10 8 8 15 -1.
+ <_>
+ 10 8 4 15 2.
+ <_>
+
+ <_>
+ 7 0 10 14 -1.
+ <_>
+ 12 0 5 14 2.
+ <_>
+
+ <_>
+ 13 10 8 10 -1.
+ <_>
+ 17 10 4 5 2.
+ <_>
+ 13 15 4 5 2.
+ <_>
+
+ <_>
+ 3 0 4 9 -1.
+ <_>
+ 5 0 2 9 2.
+ <_>
+
+ <_>
+ 16 1 6 8 -1.
+ <_>
+ 16 1 3 8 2.
+ <_>
+
+ <_>
+ 2 1 6 8 -1.
+ <_>
+ 5 1 3 8 2.
+ <_>
+
+ <_>
+ 3 6 18 12 -1.
+ <_>
+ 3 10 18 4 3.
+ <_>
+
+ <_>
+ 4 12 16 4 -1.
+ <_>
+ 4 14 16 2 2.
+ <_>
+
+ <_>
+ 4 9 16 15 -1.
+ <_>
+ 4 14 16 5 3.
+ <_>
+
+ <_>
+ 3 10 8 10 -1.
+ <_>
+ 3 10 4 5 2.
+ <_>
+ 7 15 4 5 2.
+ <_>
+
+ <_>
+ 8 18 16 6 -1.
+ <_>
+ 16 18 8 3 2.
+ <_>
+ 8 21 8 3 2.
+ <_>
+
+ <_>
+ 2 16 12 5 -1.
+ <_>
+ 6 16 4 5 3.
+ <_>
+
+ <_>
+ 14 14 9 4 -1.
+ <_>
+ 14 16 9 2 2.
+ <_>
+
+ <_>
+ 7 14 9 6 -1.
+ <_>
+ 7 16 9 2 3.
+ <_>
+
+ <_>
+ 4 10 16 12 -1.
+ <_>
+ 4 14 16 4 3.
+ <_>
+
+ <_>
+ 0 13 19 6 -1.
+ <_>
+ 0 15 19 2 3.
+ <_>
+
+ <_>
+ 10 13 9 6 -1.
+ <_>
+ 10 15 9 2 3.
+ <_>
+
+ <_>
+ 5 0 3 23 -1.
+ <_>
+ 6 0 1 23 3.
+ <_>
+
+ <_>
+ 0 8 24 6 -1.
+ <_>
+ 0 10 24 2 3.
+ <_>
+
+ <_>
+ 0 5 5 12 -1.
+ <_>
+ 0 9 5 4 3.
+ <_>
+
+ <_>
+ 3 0 19 18 -1.
+ <_>
+ 3 9 19 9 2.
+ <_>
+
+ <_>
+ 9 11 6 12 -1.
+ <_>
+ 9 11 3 6 2.
+ <_>
+ 12 17 3 6 2.
+ <_>
+
+ <_>
+ 0 5 24 8 -1.
+ <_>
+ 12 5 12 4 2.
+ <_>
+ 0 9 12 4 2.
+ <_>
+
+ <_>
+ 6 18 9 4 -1.
+ <_>
+ 6 20 9 2 2.
+ <_>
+
+ <_>
+ 8 8 10 6 -1.
+ <_>
+ 8 10 10 2 3.
+ <_>
+
+ <_>
+ 2 7 20 3 -1.
+ <_>
+ 2 8 20 1 3.
+ <_>
+
+ <_>
+ 12 0 7 20 -1.
+ <_>
+ 12 10 7 10 2.
+ <_>
+
+ <_>
+ 5 0 7 20 -1.
+ <_>
+ 5 10 7 10 2.
+ <_>
+
+ <_>
+ 14 2 2 18 -1.
+ <_>
+ 14 11 2 9 2.
+ <_>
+
+ <_>
+ 5 8 10 12 -1.
+ <_>
+ 10 8 5 12 2.
+ <_>
+
+ <_>
+ 6 9 12 8 -1.
+ <_>
+ 12 9 6 4 2.
+ <_>
+ 6 13 6 4 2.
+ <_>
+
+ <_>
+ 7 7 3 14 -1.
+ <_>
+ 7 14 3 7 2.
+ <_>
+
+ <_>
+ 11 2 12 16 -1.
+ <_>
+ 17 2 6 8 2.
+ <_>
+ 11 10 6 8 2.
+ <_>
+
+ <_>
+ 7 0 6 9 -1.
+ <_>
+ 9 0 2 9 3.
+ <_>
+
+ <_>
+ 13 14 9 4 -1.
+ <_>
+ 13 16 9 2 2.
+ <_>
+
+ <_>
+ 0 12 22 4 -1.
+ <_>
+ 0 12 11 2 2.
+ <_>
+ 11 14 11 2 2.
+ <_>
+
+ <_>
+ 1 12 22 6 -1.
+ <_>
+ 12 12 11 3 2.
+ <_>
+ 1 15 11 3 2.
+ <_>
+
+ <_>
+ 6 6 9 6 -1.
+ <_>
+ 9 6 3 6 3.
+ <_>
+
+ <_>
+ 10 0 4 9 -1.
+ <_>
+ 10 0 2 9 2.
+ <_>
+
+ <_>
+ 3 8 18 7 -1.
+ <_>
+ 9 8 6 7 3.
+ <_>
+
+ <_>
+ 0 6 24 6 -1.
+ <_>
+ 0 8 24 2 3.
+ <_>
+
+ <_>
+ 0 11 24 10 -1.
+ <_>
+ 8 11 8 10 3.
+ <_>
+
+ <_>
+ 3 3 18 21 -1.
+ <_>
+ 9 3 6 21 3.
+ <_>
+
+ <_>
+ 7 12 4 10 -1.
+ <_>
+ 9 12 2 10 2.
+ <_>
+
+ <_>
+ 10 16 10 8 -1.
+ <_>
+ 15 16 5 4 2.
+ <_>
+ 10 20 5 4 2.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 12 10 6 12 -1.
+ <_>
+ 15 10 3 6 2.
+ <_>
+ 12 16 3 6 2.
+ <_>
+
+ <_>
+ 6 10 6 12 -1.
+ <_>
+ 6 10 3 6 2.
+ <_>
+ 9 16 3 6 2.
+ <_>
+
+ <_>
+ 16 12 6 12 -1.
+ <_>
+ 19 12 3 6 2.
+ <_>
+ 16 18 3 6 2.
+ <_>
+
+ <_>
+ 2 12 6 12 -1.
+ <_>
+ 2 12 3 6 2.
+ <_>
+ 5 18 3 6 2.
+ <_>
+
+ <_>
+ 10 15 6 9 -1.
+ <_>
+ 12 15 2 9 3.
+ <_>
+
+ <_>
+ 8 15 6 9 -1.
+ <_>
+ 10 15 2 9 3.
+ <_>
+
+ <_>
+ 14 20 10 4 -1.
+ <_>
+ 14 20 5 4 2.
+ <_>
+
+ <_>
+ 0 20 10 4 -1.
+ <_>
+ 5 20 5 4 2.
+ <_>
+
+ <_>
+ 11 17 9 6 -1.
+ <_>
+ 11 19 9 2 3.
+ <_>
+
+ <_>
+ 3 2 14 4 -1.
+ <_>
+ 3 4 14 2 2.
+ <_>
+
+ <_>
+ 10 1 10 4 -1.
+ <_>
+ 10 3 10 2 2.
+ <_>
+
+ <_>
+ 0 15 10 4 -1.
+ <_>
+ 5 15 5 4 2.
+ <_>
+
+ <_>
+ 19 2 3 19 -1.
+ <_>
+ 20 2 1 19 3.
+ <_>
+
+ <_>
+ 4 12 9 8 -1.
+ <_>
+ 7 12 3 8 3.
+ <_>
+
+ <_>
+ 4 7 5 12 -1.
+ <_>
+ 4 11 5 4 3.
+ <_>
+
+ <_>
+ 0 1 24 3 -1.
+ <_>
+ 8 1 8 3 3.
+ <_>
+
+ <_>
+ 6 8 12 4 -1.
+ <_>
+ 6 10 12 2 2.
+ <_>
+
+ <_>
+ 19 3 4 10 -1.
+ <_>
+ 19 3 2 10 2.
+ <_>
+
+ <_>
+ 0 6 9 6 -1.
+ <_>
+ 3 6 3 6 3.
+ <_>
+
+ <_>
+ 18 0 6 22 -1.
+ <_>
+ 20 0 2 22 3.
+ <_>
+
+ <_>
+ 0 0 6 22 -1.
+ <_>
+ 2 0 2 22 3.
+ <_>
+
+ <_>
+ 5 15 19 3 -1.
+ <_>
+ 5 16 19 1 3.
+ <_>
+
+ <_>
+ 10 7 4 15 -1.
+ <_>
+ 10 12 4 5 3.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 0 21 18 3 -1.
+ <_>
+ 0 22 18 1 3.
+ <_>
+
+ <_>
+ 7 3 10 15 -1.
+ <_>
+ 7 8 10 5 3.
+ <_>
+
+ <_>
+ 1 7 18 3 -1.
+ <_>
+ 1 8 18 1 3.
+ <_>
+
+ <_>
+ 8 2 9 6 -1.
+ <_>
+ 11 2 3 6 3.
+ <_>
+
+ <_>
+ 0 10 24 14 -1.
+ <_>
+ 0 17 24 7 2.
+ <_>
+
+ <_>
+ 13 9 8 10 -1.
+ <_>
+ 17 9 4 5 2.
+ <_>
+ 13 14 4 5 2.
+ <_>
+
+ <_>
+ 10 5 4 9 -1.
+ <_>
+ 12 5 2 9 2.
+ <_>
+
+ <_>
+ 13 9 8 10 -1.
+ <_>
+ 17 9 4 5 2.
+ <_>
+ 13 14 4 5 2.
+ <_>
+
+ <_>
+ 7 11 10 10 -1.
+ <_>
+ 7 11 5 5 2.
+ <_>
+ 12 16 5 5 2.
+ <_>
+
+ <_>
+ 4 13 18 4 -1.
+ <_>
+ 13 13 9 2 2.
+ <_>
+ 4 15 9 2 2.
+ <_>
+
+ <_>
+ 0 0 19 2 -1.
+ <_>
+ 0 1 19 1 2.
+ <_>
+
+ <_>
+ 0 18 24 6 -1.
+ <_>
+ 8 18 8 6 3.
+ <_>
+
+ <_>
+ 6 4 8 16 -1.
+ <_>
+ 6 12 8 8 2.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 10 10 2 2.
+ <_>
+
+ <_>
+ 0 3 6 9 -1.
+ <_>
+ 0 6 6 3 3.
+ <_>
+
+ <_>
+ 13 15 7 9 -1.
+ <_>
+ 13 18 7 3 3.
+ <_>
+
+ <_>
+ 3 18 12 6 -1.
+ <_>
+ 3 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 12 14 6 9 -1.
+ <_>
+ 12 17 6 3 3.
+ <_>
+
+ <_>
+ 2 15 15 8 -1.
+ <_>
+ 2 19 15 4 2.
+ <_>
+
+ <_>
+ 9 6 6 16 -1.
+ <_>
+ 9 14 6 8 2.
+ <_>
+
+ <_>
+ 6 6 7 12 -1.
+ <_>
+ 6 10 7 4 3.
+ <_>
+
+ <_>
+ 14 6 6 9 -1.
+ <_>
+ 14 9 6 3 3.
+ <_>
+
+ <_>
+ 5 14 6 9 -1.
+ <_>
+ 5 17 6 3 3.
+ <_>
+
+ <_>
+ 10 8 6 9 -1.
+ <_>
+ 12 8 2 9 3.
+ <_>
+
+ <_>
+ 6 6 4 18 -1.
+ <_>
+ 6 6 2 9 2.
+ <_>
+ 8 15 2 9 2.
+ <_>
+
+ <_>
+ 14 9 6 12 -1.
+ <_>
+ 17 9 3 6 2.
+ <_>
+ 14 15 3 6 2.
+ <_>
+
+ <_>
+ 4 9 6 12 -1.
+ <_>
+ 4 9 3 6 2.
+ <_>
+ 7 15 3 6 2.
+ <_>
+
+ <_>
+ 14 15 9 6 -1.
+ <_>
+ 14 17 9 2 3.
+ <_>
+
+ <_>
+ 0 20 18 4 -1.
+ <_>
+ 0 20 9 2 2.
+ <_>
+ 9 22 9 2 2.
+ <_>
+
+ <_>
+ 13 18 9 6 -1.
+ <_>
+ 13 20 9 2 3.
+ <_>
+
+ <_>
+ 2 18 9 6 -1.
+ <_>
+ 2 20 9 2 3.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 19 2 4 22 -1.
+ <_>
+ 21 2 2 11 2.
+ <_>
+ 19 13 2 11 2.
+ <_>
+
+ <_>
+ 1 2 4 22 -1.
+ <_>
+ 1 2 2 11 2.
+ <_>
+ 3 13 2 11 2.
+ <_>
+
+ <_>
+ 15 0 2 24 -1.
+ <_>
+ 15 0 1 24 2.
+ <_>
+
+ <_>
+ 3 20 16 4 -1.
+ <_>
+ 11 20 8 4 2.
+ <_>
+
+ <_>
+ 11 6 4 18 -1.
+ <_>
+ 13 6 2 9 2.
+ <_>
+ 11 15 2 9 2.
+ <_>
+
+ <_>
+ 7 9 10 14 -1.
+ <_>
+ 7 9 5 7 2.
+ <_>
+ 12 16 5 7 2.
+ <_>
+
+ <_>
+ 14 6 6 9 -1.
+ <_>
+ 14 9 6 3 3.
+ <_>
+
+ <_>
+ 3 6 7 9 -1.
+ <_>
+ 3 9 7 3 3.
+ <_>
+
+ <_>
+ 20 4 4 20 -1.
+ <_>
+ 22 4 2 10 2.
+ <_>
+ 20 14 2 10 2.
+ <_>
+
+ <_>
+ 7 6 6 9 -1.
+ <_>
+ 7 9 6 3 3.
+ <_>
+
+ <_>
+ 7 0 10 14 -1.
+ <_>
+ 12 0 5 7 2.
+ <_>
+ 7 7 5 7 2.
+ <_>
+
+ <_>
+ 2 1 18 6 -1.
+ <_>
+ 11 1 9 6 2.
+ <_>
+
+ <_>
+ 15 0 2 24 -1.
+ <_>
+ 15 0 1 24 2.
+ <_>
+
+ <_>
+ 7 0 2 24 -1.
+ <_>
+ 8 0 1 24 2.
+ <_>
+
+ <_>
+ 13 12 6 7 -1.
+ <_>
+ 13 12 3 7 2.
+ <_>
+
+ <_>
+ 5 12 6 7 -1.
+ <_>
+ 8 12 3 7 2.
+ <_>
+
+ <_>
+ 3 5 18 19 -1.
+ <_>
+ 9 5 6 19 3.
+ <_>
+
+ <_>
+ 5 6 9 6 -1.
+ <_>
+ 8 6 3 6 3.
+ <_>
+
+ <_>
+ 9 5 9 6 -1.
+ <_>
+ 12 5 3 6 3.
+ <_>
+
+ <_>
+ 3 16 10 8 -1.
+ <_>
+ 3 16 5 4 2.
+ <_>
+ 8 20 5 4 2.
+ <_>
+
+ <_>
+ 19 8 5 15 -1.
+ <_>
+ 19 13 5 5 3.
+ <_>
+
+ <_>
+ 0 8 5 15 -1.
+ <_>
+ 0 13 5 5 3.
+ <_>
+
+ <_>
+ 20 4 4 20 -1.
+ <_>
+ 22 4 2 10 2.
+ <_>
+ 20 14 2 10 2.
+ <_>
+
+ <_>
+ 0 4 4 20 -1.
+ <_>
+ 0 4 2 10 2.
+ <_>
+ 2 14 2 10 2.
+ <_>
+
+ <_>
+ 7 7 10 4 -1.
+ <_>
+ 7 7 5 4 2.
+ <_>
+
+ <_>
+ 4 19 14 4 -1.
+ <_>
+ 11 19 7 4 2.
+ <_>
+
+ <_>
+ 10 11 12 3 -1.
+ <_>
+ 10 11 6 3 2.
+ <_>
+
+ <_>
+ 0 1 24 3 -1.
+ <_>
+ 0 2 24 1 3.
+ <_>
+
+ <_>
+ 7 2 14 20 -1.
+ <_>
+ 14 2 7 10 2.
+ <_>
+ 7 12 7 10 2.
+ <_>
+
+ <_>
+ 0 13 6 9 -1.
+ <_>
+ 2 13 2 9 3.
+ <_>
+
+ <_>
+ 13 0 4 19 -1.
+ <_>
+ 13 0 2 19 2.
+ <_>
+
+ <_>
+ 1 11 14 3 -1.
+ <_>
+ 8 11 7 3 2.
+ <_>
+
+ <_>
+ 7 1 16 20 -1.
+ <_>
+ 15 1 8 10 2.
+ <_>
+ 7 11 8 10 2.
+ <_>
+
+ <_>
+ 0 10 21 9 -1.
+ <_>
+ 7 10 7 9 3.
+ <_>
+
+ <_>
+ 6 19 15 5 -1.
+ <_>
+ 11 19 5 5 3.
+ <_>
+
+ <_>
+ 8 10 6 6 -1.
+ <_>
+ 11 10 3 6 2.
+ <_>
+
+ <_>
+ 7 1 16 20 -1.
+ <_>
+ 15 1 8 10 2.
+ <_>
+ 7 11 8 10 2.
+ <_>
+
+ <_>
+ 1 1 16 20 -1.
+ <_>
+ 1 1 8 10 2.
+ <_>
+ 9 11 8 10 2.
+ <_>
+
+ <_>
+ 16 4 3 12 -1.
+ <_>
+ 16 10 3 6 2.
+ <_>
+
+ <_>
+ 5 4 3 12 -1.
+ <_>
+ 5 10 3 6 2.
+ <_>
+
+ <_>
+ 7 6 10 8 -1.
+ <_>
+ 12 6 5 4 2.
+ <_>
+ 7 10 5 4 2.
+ <_>
+
+ <_>
+ 4 9 6 6 -1.
+ <_>
+ 4 12 6 3 2.
+ <_>
+
+ <_>
+ 6 5 12 4 -1.
+ <_>
+ 6 7 12 2 2.
+ <_>
+
+ <_>
+ 9 2 5 15 -1.
+ <_>
+ 9 7 5 5 3.
+ <_>
+
+ <_>
+ 15 0 9 6 -1.
+ <_>
+ 15 2 9 2 3.
+ <_>
+
+ <_>
+ 6 0 11 10 -1.
+ <_>
+ 6 5 11 5 2.
+ <_>
+
+ <_>
+ 12 7 4 12 -1.
+ <_>
+ 12 13 4 6 2.
+ <_>
+
+ <_>
+ 7 2 9 4 -1.
+ <_>
+ 7 4 9 2 2.
+ <_>
+
+ <_>
+ 6 0 13 6 -1.
+ <_>
+ 6 2 13 2 3.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 10 6 2 9 2.
+ <_>
+ 12 15 2 9 2.
+ <_>
+
+ <_>
+ 10 8 6 9 -1.
+ <_>
+ 12 8 2 9 3.
+ <_>
+
+ <_>
+ 3 18 10 6 -1.
+ <_>
+ 3 20 10 2 3.
+ <_>
+
+ <_>
+ 4 14 20 3 -1.
+ <_>
+ 4 15 20 1 3.
+ <_>
+
+ <_>
+ 2 15 9 6 -1.
+ <_>
+ 2 17 9 2 3.
+ <_>
+
+ <_>
+ 13 0 4 19 -1.
+ <_>
+ 13 0 2 19 2.
+ <_>
+
+ <_>
+ 7 0 4 19 -1.
+ <_>
+ 9 0 2 19 2.
+ <_>
+
+ <_>
+ 1 4 22 2 -1.
+ <_>
+ 1 5 22 1 2.
+ <_>
+
+ <_>
+ 0 0 9 6 -1.
+ <_>
+ 0 2 9 2 3.
+ <_>
+
+ <_>
+ 0 0 24 18 -1.
+ <_>
+ 0 9 24 9 2.
+ <_>
+
+ <_>
+ 3 2 16 8 -1.
+ <_>
+ 3 6 16 4 2.
+ <_>
+
+ <_>
+ 3 6 18 6 -1.
+ <_>
+ 3 8 18 2 3.
+ <_>
+
+ <_>
+ 3 1 6 10 -1.
+ <_>
+ 5 1 2 10 3.
+ <_>
+
+ <_>
+ 13 0 9 6 -1.
+ <_>
+ 16 0 3 6 3.
+ <_>
+
+ <_>
+ 2 0 9 6 -1.
+ <_>
+ 5 0 3 6 3.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 6 0 7 10 -1.
+ <_>
+ 6 5 7 5 2.
+ <_>
+
+ <_>
+ 2 2 20 4 -1.
+ <_>
+ 12 2 10 2 2.
+ <_>
+ 2 4 10 2 2.
+ <_>
+
+ <_>
+ 2 11 19 3 -1.
+ <_>
+ 2 12 19 1 3.
+ <_>
+
+ <_>
+ 10 8 6 9 -1.
+ <_>
+ 12 8 2 9 3.
+ <_>
+
+ <_>
+ 8 8 6 9 -1.
+ <_>
+ 10 8 2 9 3.
+ <_>
+
+ <_>
+ 13 8 4 9 -1.
+ <_>
+ 13 8 2 9 2.
+ <_>
+
+ <_>
+ 3 11 9 9 -1.
+ <_>
+ 6 11 3 9 3.
+ <_>
+
+ <_>
+ 3 9 18 5 -1.
+ <_>
+ 9 9 6 5 3.
+ <_>
+
+ <_>
+ 2 4 2 20 -1.
+ <_>
+ 2 14 2 10 2.
+ <_>
+
+ <_>
+ 14 17 8 6 -1.
+ <_>
+ 14 20 8 3 2.
+ <_>
+
+ <_>
+ 3 21 18 2 -1.
+ <_>
+ 3 22 18 1 2.
+ <_>
+
+ <_>
+ 5 4 15 6 -1.
+ <_>
+ 10 4 5 6 3.
+ <_>
+
+ <_>
+ 2 15 12 6 -1.
+ <_>
+ 2 17 12 2 3.
+ <_>
+
+ <_>
+ 17 8 6 9 -1.
+ <_>
+ 17 11 6 3 3.
+ <_>
+
+ <_>
+ 2 12 20 4 -1.
+ <_>
+ 2 12 10 2 2.
+ <_>
+ 12 14 10 2 2.
+ <_>
+
+ <_>
+ 0 17 24 6 -1.
+ <_>
+ 0 19 24 2 3.
+ <_>
+
+ <_>
+ 7 16 9 4 -1.
+ <_>
+ 7 18 9 2 2.
+ <_>
+
+ <_>
+ 15 1 4 22 -1.
+ <_>
+ 17 1 2 11 2.
+ <_>
+ 15 12 2 11 2.
+ <_>
+
+ <_>
+ 5 1 4 22 -1.
+ <_>
+ 5 1 2 11 2.
+ <_>
+ 7 12 2 11 2.
+ <_>
+
+ <_>
+ 11 13 8 9 -1.
+ <_>
+ 11 16 8 3 3.
+ <_>
+
+ <_>
+ 6 1 6 9 -1.
+ <_>
+ 8 1 2 9 3.
+ <_>
+
+ <_>
+ 11 4 3 18 -1.
+ <_>
+ 11 10 3 6 3.
+ <_>
+
+ <_>
+ 5 8 12 6 -1.
+ <_>
+ 5 8 6 3 2.
+ <_>
+ 11 11 6 3 2.
+ <_>
+
+ <_>
+ 15 7 5 8 -1.
+ <_>
+ 15 11 5 4 2.
+ <_>
+
+ <_>
+ 4 7 5 8 -1.
+ <_>
+ 4 11 5 4 2.
+ <_>
+
+ <_>
+ 12 6 6 12 -1.
+ <_>
+ 15 6 3 6 2.
+ <_>
+ 12 12 3 6 2.
+ <_>
+
+ <_>
+ 6 6 6 12 -1.
+ <_>
+ 6 6 3 6 2.
+ <_>
+ 9 12 3 6 2.
+ <_>
+
+ <_>
+ 5 9 14 8 -1.
+ <_>
+ 12 9 7 4 2.
+ <_>
+ 5 13 7 4 2.
+ <_>
+
+ <_>
+ 9 1 3 14 -1.
+ <_>
+ 9 8 3 7 2.
+ <_>
+
+ <_>
+ 12 6 6 12 -1.
+ <_>
+ 12 10 6 4 3.
+ <_>
+
+ <_>
+ 4 5 4 18 -1.
+ <_>
+ 4 5 2 9 2.
+ <_>
+ 6 14 2 9 2.
+ <_>
+
+ <_>
+ 4 6 16 18 -1.
+ <_>
+ 4 12 16 6 3.
+ <_>
+
+ <_>
+ 5 4 7 20 -1.
+ <_>
+ 5 14 7 10 2.
+ <_>
+
+ <_>
+ 14 8 8 12 -1.
+ <_>
+ 14 14 8 6 2.
+ <_>
+
+ <_>
+ 9 10 6 14 -1.
+ <_>
+ 9 10 3 7 2.
+ <_>
+ 12 17 3 7 2.
+ <_>
+
+ <_>
+ 9 5 9 6 -1.
+ <_>
+ 12 5 3 6 3.
+ <_>
+
+ <_>
+ 9 4 3 18 -1.
+ <_>
+ 10 4 1 18 3.
+ <_>
+
+ <_>
+ 1 4 22 14 -1.
+ <_>
+ 12 4 11 7 2.
+ <_>
+ 1 11 11 7 2.
+ <_>
+
+ <_>
+ 2 7 18 2 -1.
+ <_>
+ 2 8 18 1 2.
+ <_>
+
+ <_>
+ 12 6 6 12 -1.
+ <_>
+ 12 10 6 4 3.
+ <_>
+
+ <_>
+ 6 5 9 7 -1.
+ <_>
+ 9 5 3 7 3.
+ <_>
+
+ <_>
+ 12 7 4 12 -1.
+ <_>
+ 12 13 4 6 2.
+ <_>
+
+ <_>
+ 8 7 4 12 -1.
+ <_>
+ 8 13 4 6 2.
+ <_>
+
+ <_>
+ 7 2 10 22 -1.
+ <_>
+ 7 13 10 11 2.
+ <_>
+
+ <_>
+ 0 1 3 20 -1.
+ <_>
+ 1 1 1 20 3.
+ <_>
+
+ <_>
+ 4 13 18 4 -1.
+ <_>
+ 13 13 9 2 2.
+ <_>
+ 4 15 9 2 2.
+ <_>
+
+ <_>
+ 2 13 18 4 -1.
+ <_>
+ 2 13 9 2 2.
+ <_>
+ 11 15 9 2 2.
+ <_>
+
+ <_>
+ 15 15 9 6 -1.
+ <_>
+ 15 17 9 2 3.
+ <_>
+
+ <_>
+ 0 15 9 6 -1.
+ <_>
+ 0 17 9 2 3.
+ <_>
+
+ <_>
+ 6 0 18 24 -1.
+ <_>
+ 15 0 9 12 2.
+ <_>
+ 6 12 9 12 2.
+ <_>
+
+ <_>
+ 6 6 6 12 -1.
+ <_>
+ 6 10 6 4 3.
+ <_>
+
+ <_>
+ 8 7 10 4 -1.
+ <_>
+ 8 9 10 2 2.
+ <_>
+
+ <_>
+ 1 9 18 6 -1.
+ <_>
+ 1 9 9 3 2.
+ <_>
+ 10 12 9 3 2.
+ <_>
+
+ <_>
+ 6 6 18 3 -1.
+ <_>
+ 6 7 18 1 3.
+ <_>
+
+ <_>
+ 7 7 9 8 -1.
+ <_>
+ 10 7 3 8 3.
+ <_>
+
+ <_>
+ 10 12 6 12 -1.
+ <_>
+ 12 12 2 12 3.
+ <_>
+
+ <_>
+ 3 14 18 3 -1.
+ <_>
+ 3 15 18 1 3.
+ <_>
+
+ <_>
+ 15 17 9 7 -1.
+ <_>
+ 18 17 3 7 3.
+ <_>
+
+ <_>
+ 1 12 10 6 -1.
+ <_>
+ 1 14 10 2 3.
+ <_>
+
+ <_>
+ 15 17 9 7 -1.
+ <_>
+ 18 17 3 7 3.
+ <_>
+
+ <_>
+ 10 3 3 19 -1.
+ <_>
+ 11 3 1 19 3.
+ <_>
+
+ <_>
+ 15 17 9 7 -1.
+ <_>
+ 18 17 3 7 3.
+ <_>
+
+ <_>
+ 6 1 11 9 -1.
+ <_>
+ 6 4 11 3 3.
+ <_>
+
+ <_>
+ 15 17 9 7 -1.
+ <_>
+ 18 17 3 7 3.
+ <_>
+
+ <_>
+ 6 5 11 6 -1.
+ <_>
+ 6 8 11 3 2.
+ <_>
+
+ <_>
+ 16 7 8 5 -1.
+ <_>
+ 16 7 4 5 2.
+ <_>
+
+ <_>
+ 2 4 20 19 -1.
+ <_>
+ 12 4 10 19 2.
+ <_>
+
+ <_>
+ 2 1 21 6 -1.
+ <_>
+ 9 1 7 6 3.
+ <_>
+
+ <_>
+ 6 5 12 14 -1.
+ <_>
+ 6 5 6 7 2.
+ <_>
+ 12 12 6 7 2.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 2 11 8 5 -1.
+ <_>
+ 6 11 4 5 2.
+ <_>
+
+ <_>
+ 16 7 8 5 -1.
+ <_>
+ 16 7 4 5 2.
+ <_>
+
+ <_>
+ 0 7 8 5 -1.
+ <_>
+ 4 7 4 5 2.
+ <_>
+
+ <_>
+ 15 17 9 7 -1.
+ <_>
+ 18 17 3 7 3.
+ <_>
+
+ <_>
+ 8 6 8 10 -1.
+ <_>
+ 8 6 4 5 2.
+ <_>
+ 12 11 4 5 2.
+ <_>
+
+ <_>
+ 15 15 9 9 -1.
+ <_>
+ 18 15 3 9 3.
+ <_>
+
+ <_>
+ 0 15 9 9 -1.
+ <_>
+ 3 15 3 9 3.
+ <_>
+
+ <_>
+ 12 10 9 7 -1.
+ <_>
+ 15 10 3 7 3.
+ <_>
+
+ <_>
+ 3 10 9 7 -1.
+ <_>
+ 6 10 3 7 3.
+ <_>
+
+ <_>
+ 13 15 10 8 -1.
+ <_>
+ 18 15 5 4 2.
+ <_>
+ 13 19 5 4 2.
+ <_>
+
+ <_>
+ 0 1 6 12 -1.
+ <_>
+ 0 1 3 6 2.
+ <_>
+ 3 7 3 6 2.
+ <_>
+
+ <_>
+ 10 0 6 12 -1.
+ <_>
+ 13 0 3 6 2.
+ <_>
+ 10 6 3 6 2.
+ <_>
+
+ <_>
+ 7 0 10 12 -1.
+ <_>
+ 7 0 5 6 2.
+ <_>
+ 12 6 5 6 2.
+ <_>
+
+ <_>
+ 4 1 16 8 -1.
+ <_>
+ 4 1 8 8 2.
+ <_>
+
+ <_>
+ 0 21 19 3 -1.
+ <_>
+ 0 22 19 1 3.
+ <_>
+
+ <_>
+ 6 9 18 4 -1.
+ <_>
+ 15 9 9 2 2.
+ <_>
+ 6 11 9 2 2.
+ <_>
+
+ <_>
+ 3 4 9 6 -1.
+ <_>
+ 3 6 9 2 3.
+ <_>
+
+ <_>
+ 9 1 6 15 -1.
+ <_>
+ 9 6 6 5 3.
+ <_>
+
+ <_>
+ 5 9 6 6 -1.
+ <_>
+ 8 9 3 6 2.
+ <_>
+
+ <_>
+ 5 1 14 9 -1.
+ <_>
+ 5 4 14 3 3.
+ <_>
+
+ <_>
+ 3 0 8 20 -1.
+ <_>
+ 3 0 4 10 2.
+ <_>
+ 7 10 4 10 2.
+ <_>
+
+ <_>
+ 5 0 7 9 -1.
+ <_>
+ 5 3 7 3 3.
+ <_>
+
+ <_>
+ 6 6 12 5 -1.
+ <_>
+ 10 6 4 5 3.
+ <_>
+
+ <_>
+ 0 1 8 14 -1.
+ <_>
+ 4 1 4 14 2.
+ <_>
+
+ <_>
+ 2 12 22 4 -1.
+ <_>
+ 2 14 22 2 2.
+ <_>
+
+ <_>
+ 8 17 6 6 -1.
+ <_>
+ 8 20 6 3 2.
+ <_>
+
+ <_>
+ 18 1 6 7 -1.
+ <_>
+ 18 1 3 7 2.
+ <_>
+
+ <_>
+ 0 0 6 6 -1.
+ <_>
+ 3 0 3 6 2.
+ <_>
+
+ <_>
+ 4 6 17 18 -1.
+ <_>
+ 4 12 17 6 3.
+ <_>
+
+ <_>
+ 6 0 12 6 -1.
+ <_>
+ 6 0 6 3 2.
+ <_>
+ 12 3 6 3 2.
+ <_>
+
+ <_>
+ 4 7 18 4 -1.
+ <_>
+ 13 7 9 2 2.
+ <_>
+ 4 9 9 2 2.
+ <_>
+
+ <_>
+ 4 12 10 6 -1.
+ <_>
+ 4 14 10 2 3.
+ <_>
+
+ <_>
+ 7 9 10 12 -1.
+ <_>
+ 12 9 5 6 2.
+ <_>
+ 7 15 5 6 2.
+ <_>
+
+ <_>
+ 0 1 24 3 -1.
+ <_>
+ 8 1 8 3 3.
+ <_>
+
+ <_>
+ 13 11 6 6 -1.
+ <_>
+ 13 11 3 6 2.
+ <_>
+
+ <_>
+ 5 11 6 6 -1.
+ <_>
+ 8 11 3 6 2.
+ <_>
+
+ <_>
+ 3 10 19 3 -1.
+ <_>
+ 3 11 19 1 3.
+ <_>
+
+ <_>
+ 0 2 6 9 -1.
+ <_>
+ 0 5 6 3 3.
+ <_>
+
+ <_>
+ 14 16 10 6 -1.
+ <_>
+ 14 18 10 2 3.
+ <_>
+
+ <_>
+ 0 16 10 6 -1.
+ <_>
+ 0 18 10 2 3.
+ <_>
+
+ <_>
+ 14 13 9 6 -1.
+ <_>
+ 14 15 9 2 3.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 0 18 9 6 -1.
+ <_>
+ 0 20 9 2 3.
+ <_>
+
+ <_>
+ 14 13 9 6 -1.
+ <_>
+ 14 15 9 2 3.
+ <_>
+
+ <_>
+ 6 2 6 9 -1.
+ <_>
+ 8 2 2 9 3.
+ <_>
+
+ <_>
+ 15 8 4 12 -1.
+ <_>
+ 15 8 2 12 2.
+ <_>
+
+ <_>
+ 8 13 8 8 -1.
+ <_>
+ 8 17 8 4 2.
+ <_>
+
+ <_>
+ 4 20 18 3 -1.
+ <_>
+ 10 20 6 3 3.
+ <_>
+
+ <_>
+ 5 8 4 12 -1.
+ <_>
+ 7 8 2 12 2.
+ <_>
+
+ <_>
+ 7 7 12 3 -1.
+ <_>
+ 7 7 6 3 2.
+ <_>
+
+ <_>
+ 10 6 4 9 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+
+ <_>
+ 5 20 18 3 -1.
+ <_>
+ 11 20 6 3 3.
+ <_>
+
+ <_>
+ 1 20 18 3 -1.
+ <_>
+ 7 20 6 3 3.
+ <_>
+
+ <_>
+ 18 1 6 20 -1.
+ <_>
+ 21 1 3 10 2.
+ <_>
+ 18 11 3 10 2.
+ <_>
+
+ <_>
+ 0 1 6 20 -1.
+ <_>
+ 0 1 3 10 2.
+ <_>
+ 3 11 3 10 2.
+ <_>
+
+ <_>
+ 13 3 4 18 -1.
+ <_>
+ 15 3 2 9 2.
+ <_>
+ 13 12 2 9 2.
+ <_>
+
+ <_>
+ 0 2 6 12 -1.
+ <_>
+ 0 6 6 4 3.
+ <_>
+
+ <_>
+ 12 9 12 6 -1.
+ <_>
+ 18 9 6 3 2.
+ <_>
+ 12 12 6 3 2.
+ <_>
+
+ <_>
+ 7 3 4 18 -1.
+ <_>
+ 7 3 2 9 2.
+ <_>
+ 9 12 2 9 2.
+ <_>
+
+ <_>
+ 14 0 6 9 -1.
+ <_>
+ 16 0 2 9 3.
+ <_>
+
+ <_>
+ 0 9 12 6 -1.
+ <_>
+ 0 9 6 3 2.
+ <_>
+ 6 12 6 3 2.
+ <_>
+
+ <_>
+ 14 4 8 20 -1.
+ <_>
+ 18 4 4 10 2.
+ <_>
+ 14 14 4 10 2.
+ <_>
+
+ <_>
+ 2 4 8 20 -1.
+ <_>
+ 2 4 4 10 2.
+ <_>
+ 6 14 4 10 2.
+ <_>
+
+ <_>
+ 14 13 9 6 -1.
+ <_>
+ 14 15 9 2 3.
+ <_>
+
+ <_>
+ 1 13 9 6 -1.
+ <_>
+ 1 15 9 2 3.
+ <_>
+
+ <_>
+ 3 15 18 3 -1.
+ <_>
+ 9 15 6 3 3.
+ <_>
+
+ <_>
+ 5 13 9 6 -1.
+ <_>
+ 5 15 9 2 3.
+ <_>
+
+ <_>
+ 5 0 18 3 -1.
+ <_>
+ 5 1 18 1 3.
+ <_>
+
+ <_>
+ 8 2 6 7 -1.
+ <_>
+ 11 2 3 7 2.
+ <_>
+
+ <_>
+ 9 1 9 6 -1.
+ <_>
+ 12 1 3 6 3.
+ <_>
+
+ <_>
+ 6 1 9 6 -1.
+ <_>
+ 9 1 3 6 3.
+ <_>
+
+ <_>
+ 5 6 14 6 -1.
+ <_>
+ 12 6 7 3 2.
+ <_>
+ 5 9 7 3 2.
+ <_>
+
+ <_>
+ 8 2 6 13 -1.
+ <_>
+ 10 2 2 13 3.
+ <_>
+
+ <_>
+ 6 11 12 6 -1.
+ <_>
+ 12 11 6 3 2.
+ <_>
+ 6 14 6 3 2.
+ <_>
+
+ <_>
+ 3 1 18 15 -1.
+ <_>
+ 9 1 6 15 3.
+ <_>
+
+ <_>
+ 13 0 6 7 -1.
+ <_>
+ 13 0 3 7 2.
+ <_>
+
+ <_>
+ 3 3 16 6 -1.
+ <_>
+ 3 6 16 3 2.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 7 7 6 9 -1.
+ <_>
+ 9 7 2 9 3.
+ <_>
+
+ <_>
+ 13 0 4 24 -1.
+ <_>
+ 13 0 2 24 2.
+ <_>
+
+ <_>
+ 7 0 4 24 -1.
+ <_>
+ 9 0 2 24 2.
+ <_>
+
+ <_>
+ 11 9 5 12 -1.
+ <_>
+ 11 13 5 4 3.
+ <_>
+
+ <_>
+ 7 15 9 6 -1.
+ <_>
+ 7 17 9 2 3.
+ <_>
+
+ <_>
+ 5 7 18 6 -1.
+ <_>
+ 5 9 18 2 3.
+ <_>
+
+ <_>
+ 8 9 5 12 -1.
+ <_>
+ 8 13 5 4 3.
+ <_>
+
+ <_>
+ 4 17 17 6 -1.
+ <_>
+ 4 19 17 2 3.
+ <_>
+
+ <_>
+ 0 3 18 14 -1.
+ <_>
+ 0 3 9 7 2.
+ <_>
+ 9 10 9 7 2.
+ <_>
+
+ <_>
+ 0 1 24 2 -1.
+ <_>
+ 0 2 24 1 2.
+ <_>
+
+ <_>
+ 0 15 18 3 -1.
+ <_>
+ 0 16 18 1 3.
+ <_>
+
+ <_>
+ 9 0 6 9 -1.
+ <_>
+ 11 0 2 9 3.
+ <_>
+
+ <_>
+ 3 3 14 12 -1.
+ <_>
+ 3 9 14 6 2.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 8 0 6 9 -1.
+ <_>
+ 10 0 2 9 3.
+ <_>
+
+ <_>
+ 10 6 6 10 -1.
+ <_>
+ 12 6 2 10 3.
+ <_>
+
+ <_>
+ 5 0 6 9 -1.
+ <_>
+ 7 0 2 9 3.
+ <_>
+
+ <_>
+ 2 0 21 7 -1.
+ <_>
+ 9 0 7 7 3.
+ <_>
+
+ <_>
+ 6 11 12 5 -1.
+ <_>
+ 10 11 4 5 3.
+ <_>
+
+ <_>
+ 8 7 9 8 -1.
+ <_>
+ 11 7 3 8 3.
+ <_>
+
+ <_>
+ 9 6 6 18 -1.
+ <_>
+ 9 6 3 9 2.
+ <_>
+ 12 15 3 9 2.
+ <_>
+
+ <_>
+ 15 14 8 10 -1.
+ <_>
+ 19 14 4 5 2.
+ <_>
+ 15 19 4 5 2.
+ <_>
+
+ <_>
+ 1 14 8 10 -1.
+ <_>
+ 1 14 4 5 2.
+ <_>
+ 5 19 4 5 2.
+ <_>
+
+ <_>
+ 11 0 8 10 -1.
+ <_>
+ 15 0 4 5 2.
+ <_>
+ 11 5 4 5 2.
+ <_>
+
+ <_>
+ 5 0 8 10 -1.
+ <_>
+ 5 0 4 5 2.
+ <_>
+ 9 5 4 5 2.
+ <_>
+
+ <_>
+ 6 1 12 5 -1.
+ <_>
+ 6 1 6 5 2.
+ <_>
+
+ <_>
+ 1 12 18 2 -1.
+ <_>
+ 10 12 9 2 2.
+ <_>
+
+ <_>
+ 2 8 20 6 -1.
+ <_>
+ 12 8 10 3 2.
+ <_>
+ 2 11 10 3 2.
+ <_>
+
+ <_>
+ 7 6 9 7 -1.
+ <_>
+ 10 6 3 7 3.
+ <_>
+
+ <_>
+ 10 5 8 16 -1.
+ <_>
+ 14 5 4 8 2.
+ <_>
+ 10 13 4 8 2.
+ <_>
+
+ <_>
+ 3 9 16 8 -1.
+ <_>
+ 3 9 8 4 2.
+ <_>
+ 11 13 8 4 2.
+ <_>
+
+ <_>
+ 7 8 10 4 -1.
+ <_>
+ 7 8 5 4 2.
+ <_>
+
+ <_>
+ 7 12 10 8 -1.
+ <_>
+ 7 12 5 4 2.
+ <_>
+ 12 16 5 4 2.
+ <_>
+
+ <_>
+ 9 19 15 4 -1.
+ <_>
+ 14 19 5 4 3.
+ <_>
+
+ <_>
+ 1 0 18 9 -1.
+ <_>
+ 7 0 6 9 3.
+ <_>
+
+ <_>
+ 13 4 10 8 -1.
+ <_>
+ 18 4 5 4 2.
+ <_>
+ 13 8 5 4 2.
+ <_>
+
+ <_>
+ 3 16 18 4 -1.
+ <_>
+ 9 16 6 4 3.
+ <_>
+
+ <_>
+ 8 7 10 12 -1.
+ <_>
+ 13 7 5 6 2.
+ <_>
+ 8 13 5 6 2.
+ <_>
+
+ <_>
+ 6 7 10 12 -1.
+ <_>
+ 6 7 5 6 2.
+ <_>
+ 11 13 5 6 2.
+ <_>
+
+ <_>
+ 4 6 18 7 -1.
+ <_>
+ 10 6 6 7 3.
+ <_>
+
+ <_>
+ 0 17 18 3 -1.
+ <_>
+ 0 18 18 1 3.
+ <_>
+
+ <_>
+ 3 17 18 3 -1.
+ <_>
+ 3 18 18 1 3.
+ <_>
+
+ <_>
+ 2 4 6 10 -1.
+ <_>
+ 4 4 2 10 3.
+ <_>
+
+ <_>
+ 16 0 8 24 -1.
+ <_>
+ 16 0 4 24 2.
+ <_>
+
+ <_>
+ 4 0 8 15 -1.
+ <_>
+ 8 0 4 15 2.
+ <_>
+
+ <_>
+ 16 0 8 24 -1.
+ <_>
+ 16 0 4 24 2.
+ <_>
+
+ <_>
+ 1 4 18 9 -1.
+ <_>
+ 7 4 6 9 3.
+ <_>
+
+ <_>
+ 15 12 9 6 -1.
+ <_>
+ 15 14 9 2 3.
+ <_>
+
+ <_>
+ 3 9 18 6 -1.
+ <_>
+ 3 9 9 3 2.
+ <_>
+ 12 12 9 3 2.
+ <_>
+
+ <_>
+ 18 5 6 9 -1.
+ <_>
+ 18 8 6 3 3.
+ <_>
+
+ <_>
+ 0 5 6 9 -1.
+ <_>
+ 0 8 6 3 3.
+ <_>
+
+ <_>
+ 4 7 18 4 -1.
+ <_>
+ 13 7 9 2 2.
+ <_>
+ 4 9 9 2 2.
+ <_>
+
+ <_>
+ 2 1 12 20 -1.
+ <_>
+ 2 1 6 10 2.
+ <_>
+ 8 11 6 10 2.
+ <_>
+
+ <_>
+ 17 0 6 23 -1.
+ <_>
+ 17 0 3 23 2.
+ <_>
+
+ <_>
+ 1 6 2 18 -1.
+ <_>
+ 1 15 2 9 2.
+ <_>
+
+ <_>
+ 8 8 10 6 -1.
+ <_>
+ 8 10 10 2 3.
+ <_>
+
+ <_>
+ 0 6 20 6 -1.
+ <_>
+ 0 6 10 3 2.
+ <_>
+ 10 9 10 3 2.
+ <_>
+
+ <_>
+ 11 12 12 5 -1.
+ <_>
+ 15 12 4 5 3.
+ <_>
+
+ <_>
+ 0 4 3 19 -1.
+ <_>
+ 1 4 1 19 3.
+ <_>
+
+ <_>
+ 19 1 3 18 -1.
+ <_>
+ 20 1 1 18 3.
+ <_>
+
+ <_>
+ 2 1 3 18 -1.
+ <_>
+ 3 1 1 18 3.
+ <_>
+
+ <_>
+ 3 10 18 3 -1.
+ <_>
+ 9 10 6 3 3.
+ <_>
+
+ <_>
+ 4 4 10 9 -1.
+ <_>
+ 9 4 5 9 2.
+ <_>
+
+ <_>
+ 7 13 14 7 -1.
+ <_>
+ 7 13 7 7 2.
+ <_>
+
+ <_>
+ 3 13 14 7 -1.
+ <_>
+ 10 13 7 7 2.
+ <_>
+
+ <_>
+ 8 15 9 6 -1.
+ <_>
+ 11 15 3 6 3.
+ <_>
+
+ <_>
+ 4 14 8 10 -1.
+ <_>
+ 4 14 4 5 2.
+ <_>
+ 8 19 4 5 2.
+ <_>
+
+ <_>
+ 10 14 4 10 -1.
+ <_>
+ 10 19 4 5 2.
+ <_>
+
+ <_>
+ 3 8 5 16 -1.
+ <_>
+ 3 16 5 8 2.
+ <_>
+
+ <_>
+ 15 10 9 6 -1.
+ <_>
+ 15 12 9 2 3.
+ <_>
+
+ <_>
+ 0 10 9 6 -1.
+ <_>
+ 0 12 9 2 3.
+ <_>
+
+ <_>
+ 6 7 12 9 -1.
+ <_>
+ 6 10 12 3 3.
+ <_>
+
+ <_>
+ 9 10 5 8 -1.
+ <_>
+ 9 14 5 4 2.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 8 15 6 9 -1.
+ <_>
+ 10 15 2 9 3.
+ <_>
+
+ <_>
+ 16 6 7 6 -1.
+ <_>
+ 16 9 7 3 2.
+ <_>
+
+ <_>
+ 8 1 4 22 -1.
+ <_>
+ 10 1 2 22 2.
+ <_>
+
+ <_>
+ 6 6 14 3 -1.
+ <_>
+ 6 6 7 3 2.
+ <_>
+
+ <_>
+ 0 18 19 3 -1.
+ <_>
+ 0 19 19 1 3.
+ <_>
+
+ <_>
+ 17 0 6 24 -1.
+ <_>
+ 17 0 3 24 2.
+ <_>
+
+ <_>
+ 0 13 15 6 -1.
+ <_>
+ 5 13 5 6 3.
+ <_>
+
+ <_>
+ 9 6 10 14 -1.
+ <_>
+ 14 6 5 7 2.
+ <_>
+ 9 13 5 7 2.
+ <_>
+
+ <_>
+ 1 6 8 10 -1.
+ <_>
+ 1 6 4 5 2.
+ <_>
+ 5 11 4 5 2.
+ <_>
+
+ <_>
+ 7 6 12 5 -1.
+ <_>
+ 7 6 6 5 2.
+ <_>
+
+ <_>
+ 7 7 9 6 -1.
+ <_>
+ 10 7 3 6 3.
+ <_>
+
+ <_>
+ 7 8 14 14 -1.
+ <_>
+ 14 8 7 7 2.
+ <_>
+ 7 15 7 7 2.
+ <_>
+
+ <_>
+ 3 8 14 14 -1.
+ <_>
+ 3 8 7 7 2.
+ <_>
+ 10 15 7 7 2.
+ <_>
+
+ <_>
+ 9 8 13 4 -1.
+ <_>
+ 9 10 13 2 2.
+ <_>
+
+ <_>
+ 3 2 6 12 -1.
+ <_>
+ 3 2 3 6 2.
+ <_>
+ 6 8 3 6 2.
+ <_>
+
+ <_>
+ 6 10 17 6 -1.
+ <_>
+ 6 13 17 3 2.
+ <_>
+
+ <_>
+ 1 10 17 6 -1.
+ <_>
+ 1 13 17 3 2.
+ <_>
+
+ <_>
+ 16 7 8 9 -1.
+ <_>
+ 16 10 8 3 3.
+ <_>
+
+ <_>
+ 0 7 8 9 -1.
+ <_>
+ 0 10 8 3 3.
+ <_>
+
+ <_>
+ 0 9 24 10 -1.
+ <_>
+ 12 9 12 5 2.
+ <_>
+ 0 14 12 5 2.
+ <_>
+
+ <_>
+ 3 2 15 8 -1.
+ <_>
+ 8 2 5 8 3.
+ <_>
+
+ <_>
+ 4 2 18 8 -1.
+ <_>
+ 10 2 6 8 3.
+ <_>
+
+ <_>
+ 0 1 18 4 -1.
+ <_>
+ 0 1 9 2 2.
+ <_>
+ 9 3 9 2 2.
+ <_>
+
+ <_>
+ 20 2 3 18 -1.
+ <_>
+ 21 2 1 18 3.
+ <_>
+
+ <_>
+ 1 3 3 19 -1.
+ <_>
+ 2 3 1 19 3.
+ <_>
+
+ <_>
+ 18 8 6 16 -1.
+ <_>
+ 20 8 2 16 3.
+ <_>
+
+ <_>
+ 0 8 6 16 -1.
+ <_>
+ 2 8 2 16 3.
+ <_>
+
+ <_>
+ 8 18 11 6 -1.
+ <_>
+ 8 20 11 2 3.
+ <_>
+
+ <_>
+ 4 6 12 5 -1.
+ <_>
+ 8 6 4 5 3.
+ <_>
+
+ <_>
+ 7 6 12 5 -1.
+ <_>
+ 11 6 4 5 3.
+ <_>
+
+ <_>
+ 6 3 9 6 -1.
+ <_>
+ 9 3 3 6 3.
+ <_>
+
+ <_>
+ 7 6 12 5 -1.
+ <_>
+ 7 6 6 5 2.
+ <_>
+
+ <_>
+ 9 8 6 7 -1.
+ <_>
+ 12 8 3 7 2.
+ <_>
+
+ <_>
+ 8 2 9 6 -1.
+ <_>
+ 11 2 3 6 3.
+ <_>
+
+ <_>
+ 8 14 6 9 -1.
+ <_>
+ 8 17 6 3 3.
+ <_>
+
+ <_>
+ 8 2 9 6 -1.
+ <_>
+ 11 2 3 6 3.
+ <_>
+
+ <_>
+ 4 3 16 20 -1.
+ <_>
+ 4 3 8 10 2.
+ <_>
+ 12 13 8 10 2.
+ <_>
+
+ <_>
+ 7 6 10 12 -1.
+ <_>
+ 12 6 5 6 2.
+ <_>
+ 7 12 5 6 2.
+ <_>
+
+ <_>
+ 0 2 7 12 -1.
+ <_>
+ 0 6 7 4 3.
+ <_>
+
+ <_>
+ 12 17 11 6 -1.
+ <_>
+ 12 19 11 2 3.
+ <_>
+
+ <_>
+ 4 7 12 8 -1.
+ <_>
+ 4 7 6 4 2.
+ <_>
+ 10 11 6 4 2.
+ <_>
+
+ <_>
+ 8 11 8 10 -1.
+ <_>
+ 12 11 4 5 2.
+ <_>
+ 8 16 4 5 2.
+ <_>
+
+ <_>
+ 9 1 4 9 -1.
+ <_>
+ 11 1 2 9 2.
+ <_>
+
+ <_>
+ 14 0 3 22 -1.
+ <_>
+ 15 0 1 22 3.
+ <_>
+
+ <_>
+ 7 0 3 22 -1.
+ <_>
+ 8 0 1 22 3.
+ <_>
+
+ <_>
+ 4 7 18 4 -1.
+ <_>
+ 13 7 9 2 2.
+ <_>
+ 4 9 9 2 2.
+ <_>
+
+ <_>
+ 10 2 4 15 -1.
+ <_>
+ 10 7 4 5 3.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 0 0 18 13 -1.
+ <_>
+ 9 0 9 13 2.
+ <_>
+
+ <_>
+ 16 0 3 24 -1.
+ <_>
+ 17 0 1 24 3.
+ <_>
+
+ <_>
+ 5 0 3 24 -1.
+ <_>
+ 6 0 1 24 3.
+ <_>
+
+ <_>
+ 10 15 5 8 -1.
+ <_>
+ 10 19 5 4 2.
+ <_>
+
+ <_>
+ 2 18 18 2 -1.
+ <_>
+ 2 19 18 1 2.
+ <_>
+
+ <_>
+ 2 8 20 3 -1.
+ <_>
+ 2 9 20 1 3.
+ <_>
+
+ <_>
+ 7 6 9 6 -1.
+ <_>
+ 7 8 9 2 3.
+ <_>
+
+ <_>
+ 3 2 19 10 -1.
+ <_>
+ 3 7 19 5 2.
+ <_>
+
+ <_>
+ 2 7 19 3 -1.
+ <_>
+ 2 8 19 1 3.
+ <_>
+
+ <_>
+ 15 6 9 4 -1.
+ <_>
+ 15 8 9 2 2.
+ <_>
+
+ <_>
+ 2 2 18 8 -1.
+ <_>
+ 8 2 6 8 3.
+ <_>
+
+ <_>
+ 10 9 14 4 -1.
+ <_>
+ 10 9 7 4 2.
+ <_>
+
+ <_>
+ 4 4 6 16 -1.
+ <_>
+ 7 4 3 16 2.
+ <_>
+
+ <_>
+ 15 8 9 16 -1.
+ <_>
+ 18 8 3 16 3.
+ <_>
+
+ <_>
+ 0 8 9 16 -1.
+ <_>
+ 3 8 3 16 3.
+ <_>
+
+ <_>
+ 18 0 6 14 -1.
+ <_>
+ 20 0 2 14 3.
+ <_>
+
+ <_>
+ 0 0 6 14 -1.
+ <_>
+ 2 0 2 14 3.
+ <_>
+
+ <_>
+ 15 0 6 22 -1.
+ <_>
+ 17 0 2 22 3.
+ <_>
+
+ <_>
+ 3 0 6 22 -1.
+ <_>
+ 5 0 2 22 3.
+ <_>
+
+ <_>
+ 12 2 12 20 -1.
+ <_>
+ 16 2 4 20 3.
+ <_>
+
+ <_>
+ 0 2 12 20 -1.
+ <_>
+ 4 2 4 20 3.
+ <_>
+
+ <_>
+ 11 6 4 9 -1.
+ <_>
+ 11 6 2 9 2.
+ <_>
+
+ <_>
+ 9 0 6 16 -1.
+ <_>
+ 12 0 3 16 2.
+ <_>
+
+ <_>
+ 12 1 3 12 -1.
+ <_>
+ 12 7 3 6 2.
+ <_>
+
+ <_>
+ 3 4 18 6 -1.
+ <_>
+ 3 4 9 3 2.
+ <_>
+ 12 7 9 3 2.
+ <_>
+
+ <_>
+ 5 5 16 8 -1.
+ <_>
+ 13 5 8 4 2.
+ <_>
+ 5 9 8 4 2.
+ <_>
+
+ <_>
+ 0 13 10 6 -1.
+ <_>
+ 0 15 10 2 3.
+ <_>
+
+ <_>
+ 8 14 9 6 -1.
+ <_>
+ 8 16 9 2 3.
+ <_>
+
+ <_>
+ 6 2 9 6 -1.
+ <_>
+ 9 2 3 6 3.
+ <_>
+
+ <_>
+ 14 1 10 8 -1.
+ <_>
+ 19 1 5 4 2.
+ <_>
+ 14 5 5 4 2.
+ <_>
+
+ <_>
+ 9 1 3 12 -1.
+ <_>
+ 9 7 3 6 2.
+ <_>
+
+ <_>
+ 6 4 12 9 -1.
+ <_>
+ 6 7 12 3 3.
+ <_>
+
+ <_>
+ 6 5 12 6 -1.
+ <_>
+ 10 5 4 6 3.
+ <_>
+
+ <_>
+ 1 1 8 5 -1.
+ <_>
+ 5 1 4 5 2.
+ <_>
+
+ <_>
+ 12 12 6 8 -1.
+ <_>
+ 12 16 6 4 2.
+ <_>
+
+ <_>
+ 3 12 12 6 -1.
+ <_>
+ 3 14 12 2 3.
+ <_>
+
+ <_>
+ 9 18 12 6 -1.
+ <_>
+ 15 18 6 3 2.
+ <_>
+ 9 21 6 3 2.
+ <_>
+
+ <_>
+ 4 13 6 6 -1.
+ <_>
+ 4 16 6 3 2.
+ <_>
+
+ <_>
+ 11 3 7 18 -1.
+ <_>
+ 11 12 7 9 2.
+ <_>
+
+ <_>
+ 3 9 18 3 -1.
+ <_>
+ 9 9 6 3 3.
+ <_>
+
+ <_>
+ 5 3 19 2 -1.
+ <_>
+ 5 4 19 1 2.
+ <_>
+
+ <_>
+ 4 2 12 6 -1.
+ <_>
+ 4 2 6 3 2.
+ <_>
+ 10 5 6 3 2.
+ <_>
+
+ <_>
+ 9 6 6 9 -1.
+ <_>
+ 11 6 2 9 3.
+ <_>
+
+ <_>
+ 8 6 6 9 -1.
+ <_>
+ 10 6 2 9 3.
+ <_>
+
+ <_>
+ 16 9 5 15 -1.
+ <_>
+ 16 14 5 5 3.
+ <_>
+
+ <_>
+ 3 9 5 15 -1.
+ <_>
+ 3 14 5 5 3.
+ <_>
+
+ <_>
+ 6 6 14 6 -1.
+ <_>
+ 13 6 7 3 2.
+ <_>
+ 6 9 7 3 2.
+ <_>
+
+ <_>
+ 8 6 3 14 -1.
+ <_>
+ 8 13 3 7 2.
+ <_>
+
+ <_>
+ 0 16 24 5 -1.
+ <_>
+ 8 16 8 5 3.
+ <_>
+
+ <_>
+ 0 20 20 3 -1.
+ <_>
+ 10 20 10 3 2.
+ <_>
+
+ <_>
+ 5 10 18 2 -1.
+ <_>
+ 5 11 18 1 2.
+ <_>
+
+ <_>
+ 0 6 6 10 -1.
+ <_>
+ 2 6 2 10 3.
+ <_>
+
+ <_>
+ 2 1 20 3 -1.
+ <_>
+ 2 2 20 1 3.
+ <_>
+
+ <_>
+ 9 13 6 11 -1.
+ <_>
+ 11 13 2 11 3.
+ <_>
+
+ <_>
+ 9 15 6 8 -1.
+ <_>
+ 9 19 6 4 2.
+ <_>
+
+ <_>
+ 9 12 6 9 -1.
+ <_>
+ 9 15 6 3 3.
+ <_>
+
+ <_>
+ 5 11 18 2 -1.
+ <_>
+ 5 12 18 1 2.
+ <_>
+
+ <_>
+ 2 6 15 6 -1.
+ <_>
+ 2 8 15 2 3.
+ <_>
+
+ <_>
+ 6 0 18 3 -1.
+ <_>
+ 6 1 18 1 3.
+ <_>
+
+ <_>
+ 5 0 3 18 -1.
+ <_>
+ 6 0 1 18 3.
+ <_>
+
+ <_>
+ 18 3 6 10 -1.
+ <_>
+ 20 3 2 10 3.
+ <_>
+
+ <_>
+ 0 3 6 10 -1.
+ <_>
+ 2 3 2 10 3.
+ <_>
+
+ <_>
+ 10 5 8 9 -1.
+ <_>
+ 10 5 4 9 2.
+ <_>
+
+ <_>
+ 6 5 8 9 -1.
+ <_>
+ 10 5 4 9 2.
+ <_>
+
+ <_>
+ 3 2 20 3 -1.
+ <_>
+ 3 3 20 1 3.
+ <_>
+
+ <_>
+ 5 2 13 4 -1.
+ <_>
+ 5 4 13 2 2.
+ <_>
+
+ <_>
+ 17 0 7 14 -1.
+ <_>
+ 17 7 7 7 2.
+ <_>
+
+ <_>
+ 0 0 7 14 -1.
+ <_>
+ 0 7 7 7 2.
+ <_>
+
+ <_>
+ 9 11 10 6 -1.
+ <_>
+ 9 11 5 6 2.
+ <_>
+
+ <_>
+ 5 11 10 6 -1.
+ <_>
+ 10 11 5 6 2.
+ <_>
+
+ <_>
+ 11 6 3 18 -1.
+ <_>
+ 11 12 3 6 3.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 6 16 18 3 -1.
+ <_>
+ 6 17 18 1 3.
+ <_>
+
+ <_>
+ 4 6 9 10 -1.
+ <_>
+ 4 11 9 5 2.
+ <_>
+
+ <_>
+ 9 7 15 4 -1.
+ <_>
+ 9 9 15 2 2.
+ <_>
+
+ <_>
+ 5 6 12 6 -1.
+ <_>
+ 5 6 6 3 2.
+ <_>
+ 11 9 6 3 2.
+ <_>
+
+ <_>
+ 6 1 12 9 -1.
+ <_>
+ 6 4 12 3 3.
+ <_>
+
+ <_>
+ 7 9 6 12 -1.
+ <_>
+ 7 9 3 6 2.
+ <_>
+ 10 15 3 6 2.
+ <_>
+
+ <_>
+ 11 5 13 6 -1.
+ <_>
+ 11 7 13 2 3.
+ <_>
+
+ <_>
+ 1 11 22 13 -1.
+ <_>
+ 12 11 11 13 2.
+ <_>
+
+ <_>
+ 18 8 6 6 -1.
+ <_>
+ 18 11 6 3 2.
+ <_>
+
+ <_>
+ 0 8 6 6 -1.
+ <_>
+ 0 11 6 3 2.
+ <_>
+
+ <_>
+ 0 6 24 3 -1.
+ <_>
+ 0 7 24 1 3.
+ <_>
+
+ <_>
+ 0 5 10 6 -1.
+ <_>
+ 0 7 10 2 3.
+ <_>
+
+ <_>
+ 6 7 18 3 -1.
+ <_>
+ 6 8 18 1 3.
+ <_>
+
+ <_>
+ 0 0 10 6 -1.
+ <_>
+ 0 2 10 2 3.
+ <_>
+
+ <_>
+ 19 0 3 19 -1.
+ <_>
+ 20 0 1 19 3.
+ <_>
+
+ <_>
+ 4 6 12 16 -1.
+ <_>
+ 4 6 6 8 2.
+ <_>
+ 10 14 6 8 2.
+ <_>
+
+ <_>
+ 19 6 4 18 -1.
+ <_>
+ 21 6 2 9 2.
+ <_>
+ 19 15 2 9 2.
+ <_>
+
+ <_>
+ 1 6 4 18 -1.
+ <_>
+ 1 6 2 9 2.
+ <_>
+ 3 15 2 9 2.
+ <_>
+
+ <_>
+ 3 21 18 3 -1.
+ <_>
+ 3 22 18 1 3.
+ <_>
+
+ <_>
+ 0 19 9 4 -1.
+ <_>
+ 0 21 9 2 2.
+ <_>
+
+ <_>
+ 12 18 12 6 -1.
+ <_>
+ 18 18 6 3 2.
+ <_>
+ 12 21 6 3 2.
+ <_>
+
+ <_>
+ 7 18 9 4 -1.
+ <_>
+ 7 20 9 2 2.
+ <_>
+
+ <_>
+ 12 16 10 8 -1.
+ <_>
+ 17 16 5 4 2.
+ <_>
+ 12 20 5 4 2.
+ <_>
+
+ <_>
+ 2 16 10 8 -1.
+ <_>
+ 2 16 5 4 2.
+ <_>
+ 7 20 5 4 2.
+ <_>
+
+ <_>
+ 14 0 10 12 -1.
+ <_>
+ 19 0 5 6 2.
+ <_>
+ 14 6 5 6 2.
+ <_>
+
+ <_>
+ 0 0 10 12 -1.
+ <_>
+ 0 0 5 6 2.
+ <_>
+ 5 6 5 6 2.
+ <_>
+
+ <_>
+ 15 14 9 6 -1.
+ <_>
+ 15 16 9 2 3.
+ <_>
+
+ <_>
+ 0 14 9 6 -1.
+ <_>
+ 0 16 9 2 3.
+ <_>
+
+ <_>
+ 14 14 10 6 -1.
+ <_>
+ 14 16 10 2 3.
+ <_>
+
+ <_>
+ 0 14 10 6 -1.
+ <_>
+ 0 16 10 2 3.
+ <_>
+
+ <_>
+ 5 18 18 2 -1.
+ <_>
+ 5 19 18 1 2.
+ <_>
+
+ <_>
+ 0 18 18 3 -1.
+ <_>
+ 0 19 18 1 3.
+ <_>
+
+ <_>
+ 3 5 18 12 -1.
+ <_>
+ 12 5 9 6 2.
+ <_>
+ 3 11 9 6 2.
+ <_>
+
+ <_>
+ 5 3 7 9 -1.
+ <_>
+ 5 6 7 3 3.
+ <_>
+
+ <_>
+ 4 0 19 15 -1.
+ <_>
+ 4 5 19 5 3.
+ <_>
+
+ <_>
+ 3 0 16 4 -1.
+ <_>
+ 3 2 16 2 2.
+ <_>
+
+ <_>
+ 4 12 16 12 -1.
+ <_>
+ 4 12 8 12 2.
+ <_>
+
+ <_>
+ 4 3 12 15 -1.
+ <_>
+ 10 3 6 15 2.
+ <_>
+
+ <_>
+ 16 4 2 19 -1.
+ <_>
+ 16 4 1 19 2.
+ <_>
+
+ <_>
+ 6 4 2 19 -1.
+ <_>
+ 7 4 1 19 2.
+ <_>
+
+ <_>
+ 13 14 8 10 -1.
+ <_>
+ 17 14 4 5 2.
+ <_>
+ 13 19 4 5 2.
+ <_>
+
+ <_>
+ 3 14 8 10 -1.
+ <_>
+ 3 14 4 5 2.
+ <_>
+ 7 19 4 5 2.
+ <_>
+
+ <_>
+ 12 6 3 18 -1.
+ <_>
+ 12 12 3 6 3.
+ <_>
+
+ <_>
+ 5 11 12 6 -1.
+ <_>
+ 5 11 6 3 2.
+ <_>
+ 11 14 6 3 2.
+ <_>
+
+ <_>
+ 10 5 8 10 -1.
+ <_>
+ 14 5 4 5 2.
+ <_>
+ 10 10 4 5 2.
+ <_>
+
+ <_>
+ 6 4 12 10 -1.
+ <_>
+ 6 4 6 5 2.
+ <_>
+ 12 9 6 5 2.
+ <_>
+
+ <_>
+ 6 8 18 10 -1.
+ <_>
+ 15 8 9 5 2.
+ <_>
+ 6 13 9 5 2.
+ <_>
+
+ <_>
+ 0 8 18 10 -1.
+ <_>
+ 0 8 9 5 2.
+ <_>
+ 9 13 9 5 2.
+ <_>
+
+ <_>
+ 12 6 3 18 -1.
+ <_>
+ 12 12 3 6 3.
+ <_>
+
+ <_>
+ 0 14 18 3 -1.
+ <_>
+ 0 15 18 1 3.
+ <_>
+
+ <_>
+ 12 6 3 18 -1.
+ <_>
+ 12 12 3 6 3.
+ <_>
+
+ <_>
+ 9 6 3 18 -1.
+ <_>
+ 9 12 3 6 3.
+ <_>
+
+ <_>
+ 6 14 18 3 -1.
+ <_>
+ 6 15 18 1 3.
+ <_>
+
+ <_>
+ 0 5 18 3 -1.
+ <_>
+ 0 6 18 1 3.
+ <_>
+
+ <_>
+ 2 5 22 3 -1.
+ <_>
+ 2 6 22 1 3.
+ <_>
+
+ <_>
+ 0 0 21 10 -1.
+ <_>
+ 7 0 7 10 3.
+ <_>
+
+ <_>
+ 6 3 18 17 -1.
+ <_>
+ 12 3 6 17 3.
+ <_>
+
+ <_>
+ 0 3 18 17 -1.
+ <_>
+ 6 3 6 17 3.
+ <_>
+
+ <_>
+ 0 12 24 11 -1.
+ <_>
+ 8 12 8 11 3.
+ <_>
+
+ <_>
+ 4 10 16 6 -1.
+ <_>
+ 4 13 16 3 2.
+ <_>
+
+ <_>
+ 12 8 6 8 -1.
+ <_>
+ 12 12 6 4 2.
+ <_>
+
+ <_>
+ 6 14 8 7 -1.
+ <_>
+ 10 14 4 7 2.
+ <_>
+
+ <_>
+ 15 10 6 14 -1.
+ <_>
+ 18 10 3 7 2.
+ <_>
+ 15 17 3 7 2.
+ <_>
+
+ <_>
+ 3 10 6 14 -1.
+ <_>
+ 3 10 3 7 2.
+ <_>
+ 6 17 3 7 2.
+ <_>
+
+ <_>
+ 6 12 18 2 -1.
+ <_>
+ 6 13 18 1 2.
+ <_>
+
+ <_>
+ 5 8 10 6 -1.
+ <_>
+ 5 10 10 2 3.
+ <_>
+
+ <_>
+ 12 11 9 4 -1.
+ <_>
+ 12 13 9 2 2.
+ <_>
+
+ <_>
+ 0 11 9 6 -1.
+ <_>
+ 0 13 9 2 3.
+ <_>
+
+ <_>
+ 11 2 3 18 -1.
+ <_>
+ 12 2 1 18 3.
+ <_>
+
+ <_>
+ 10 2 3 18 -1.
+ <_>
+ 11 2 1 18 3.
+ <_>
+
+ <_>
+ 9 12 6 10 -1.
+ <_>
+ 11 12 2 10 3.
+ <_>
+
+ <_>
+ 1 10 6 9 -1.
+ <_>
+ 1 13 6 3 3.
+ <_>
+
+ <_>
+ 6 9 16 6 -1.
+ <_>
+ 14 9 8 3 2.
+ <_>
+ 6 12 8 3 2.
+ <_>
+
+ <_>
+ 1 8 9 6 -1.
+ <_>
+ 1 10 9 2 3.
+ <_>
+
+ <_>
+ 7 7 16 6 -1.
+ <_>
+ 7 9 16 2 3.
+ <_>
+
+ <_>
+ 0 0 18 3 -1.
+ <_>
+ 0 1 18 1 3.
+ <_>
+
+ <_>
+ 10 0 6 9 -1.
+ <_>
+ 12 0 2 9 3.
+ <_>
+
+ <_>
+ 9 5 6 6 -1.
+ <_>
+ 12 5 3 6 2.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+ 10 15 2 9 2.
+ <_>
+
+ <_>
+ 8 0 6 9 -1.
+ <_>
+ 10 0 2 9 3.
+ <_>
+
+ <_>
+ 9 1 6 9 -1.
+ <_>
+ 9 4 6 3 3.
+ <_>
+
+ <_>
+ 1 0 18 9 -1.
+ <_>
+ 1 3 18 3 3.
+ <_>
+
+ <_>
+ 0 3 24 3 -1.
+ <_>
+ 0 4 24 1 3.
+ <_>
+
+ <_>
+ 6 14 9 4 -1.
+ <_>
+ 6 16 9 2 2.
+ <_>
+
+ <_>
+ 8 9 8 10 -1.
+ <_>
+ 12 9 4 5 2.
+ <_>
+ 8 14 4 5 2.
+ <_>
+
+ <_>
+ 5 2 13 9 -1.
+ <_>
+ 5 5 13 3 3.
+ <_>
+
+ <_>
+ 4 4 16 9 -1.
+ <_>
+ 4 7 16 3 3.
+ <_>
+
+ <_>
+ 4 4 14 9 -1.
+ <_>
+ 4 7 14 3 3.
+ <_>
+
+ <_>
+ 8 5 9 6 -1.
+ <_>
+ 8 7 9 2 3.
+ <_>
+
+ <_>
+ 1 7 16 6 -1.
+ <_>
+ 1 9 16 2 3.
+ <_>
+
+ <_>
+ 10 5 13 9 -1.
+ <_>
+ 10 8 13 3 3.
+ <_>
+
+ <_>
+ 1 5 13 9 -1.
+ <_>
+ 1 8 13 3 3.
+ <_>
+
+ <_>
+ 0 4 24 6 -1.
+ <_>
+ 12 4 12 3 2.
+ <_>
+ 0 7 12 3 2.
+ <_>
+
+ <_>
+ 1 14 10 9 -1.
+ <_>
+ 1 17 10 3 3.
+ <_>
+
+ <_>
+ 5 17 18 3 -1.
+ <_>
+ 5 18 18 1 3.
+ <_>
+
+ <_>
+ 0 16 18 3 -1.
+ <_>
+ 0 17 18 1 3.
+ <_>
+
+ <_>
+ 9 17 9 6 -1.
+ <_>
+ 9 19 9 2 3.
+ <_>
+
+ <_>
+ 1 20 22 4 -1.
+ <_>
+ 1 20 11 2 2.
+ <_>
+ 12 22 11 2 2.
+ <_>
+
+ <_>
+ 8 14 8 6 -1.
+ <_>
+ 8 17 8 3 2.
+ <_>
+
+ <_>
+ 8 6 8 15 -1.
+ <_>
+ 8 11 8 5 3.
+ <_>
+
+ <_>
+ 5 4 18 3 -1.
+ <_>
+ 5 5 18 1 3.
+ <_>
+
+ <_>
+ 9 3 5 10 -1.
+ <_>
+ 9 8 5 5 2.
+ <_>
+
+ <_>
+ 6 8 12 3 -1.
+ <_>
+ 6 8 6 3 2.
+ <_>
+
+ <_>
+ 2 6 18 6 -1.
+ <_>
+ 2 6 9 3 2.
+ <_>
+ 11 9 9 3 2.
+ <_>
+
+ <_>
+ 10 6 4 18 -1.
+ <_>
+ 12 6 2 9 2.
+ <_>
+ 10 15 2 9 2.
+ <_>
+
+ <_>
+ 7 5 6 6 -1.
+ <_>
+ 10 5 3 6 2.
+ <_>
+
+ <_>
+ 14 5 2 18 -1.
+ <_>
+ 14 14 2 9 2.
+ <_>
+
+ <_>
+ 8 5 2 18 -1.
+ <_>
+ 8 14 2 9 2.
+ <_>
+
+ <_>
+ 9 2 10 6 -1.
+ <_>
+ 9 2 5 6 2.
+ <_>
+
+ <_>
+ 3 1 18 12 -1.
+ <_>
+ 12 1 9 12 2.
+ <_>
+
+ <_>
+ 5 2 17 22 -1.
+ <_>
+ 5 13 17 11 2.
+ <_>
+
+ <_>
+ 4 0 12 6 -1.
+ <_>
+ 4 2 12 2 3.
+ <_>
+
+ <_>
+ 6 9 16 6 -1.
+ <_>
+ 14 9 8 3 2.
+ <_>
+ 6 12 8 3 2.
+ <_>
+
+ <_>
+ 9 0 5 18 -1.
+ <_>
+ 9 9 5 9 2.
+ <_>
+
+ <_>
+ 12 0 6 9 -1.
+ <_>
+ 14 0 2 9 3.
+ <_>
+
+ <_>
+ 6 0 6 9 -1.
+ <_>
+ 8 0 2 9 3.
+ <_>
+
+ <_>
+ 9 1 6 12 -1.
+ <_>
+ 11 1 2 12 3.
+ <_>
+
+ <_>
+ 5 9 13 4 -1.
+ <_>
+ 5 11 13 2 2.
+ <_>
+
+ <_>
+ 5 8 19 3 -1.
+ <_>
+ 5 9 19 1 3.
+ <_>
+
+ <_>
+ 9 9 6 8 -1.
+ <_>
+ 9 13 6 4 2.
+ <_>
+
+ <_>
+ 11 9 4 15 -1.
+ <_>
+ 11 14 4 5 3.
+ <_>
+
+ <_>
+ 2 0 6 14 -1.
+ <_>
+ 2 0 3 7 2.
+ <_>
+ 5 7 3 7 2.
+ <_>
+
+ <_>
+ 15 1 6 14 -1.
+ <_>
+ 18 1 3 7 2.
+ <_>
+ 15 8 3 7 2.
+ <_>
+
+ <_>
+ 3 1 6 14 -1.
+ <_>
+ 3 1 3 7 2.
+ <_>
+ 6 8 3 7 2.
+ <_>
+
+ <_>
+ 3 20 18 4 -1.
+ <_>
+ 12 20 9 2 2.
+ <_>
+ 3 22 9 2 2.
+ <_>
+
+ <_>
+ 5 0 4 20 -1.
+ <_>
+ 5 0 2 10 2.
+ <_>
+ 7 10 2 10 2.
+ <_>
+
+ <_>
+ 16 8 8 12 -1.
+ <_>
+ 20 8 4 6 2.
+ <_>
+ 16 14 4 6 2.
+ <_>
+
+ <_>
+ 0 8 8 12 -1.
+ <_>
+ 0 8 4 6 2.
+ <_>
+ 4 14 4 6 2.
+ <_>
+
+ <_>
+ 13 13 10 8 -1.
+ <_>
+ 18 13 5 4 2.
+ <_>
+ 13 17 5 4 2.
+ <_>
+
+ <_>
+ 1 13 10 8 -1.
+ <_>
+ 1 13 5 4 2.
+ <_>
+ 6 17 5 4 2.
+ <_>
+
+ <_>
+ 15 8 4 15 -1.
+ <_>
+ 15 13 4 5 3.
+ <_>
+
+ <_>
+ 5 8 4 15 -1.
+ <_>
+ 5 13 4 5 3.
+ <_>
+
+ <_>
+ 6 11 16 12 -1.
+ <_>
+ 6 15 16 4 3.
+ <_>
+
+ <_>
+ 2 11 16 12 -1.
+ <_>
+ 2 15 16 4 3.
+ <_>
+
+ <_>
+ 14 12 7 9 -1.
+ <_>
+ 14 15 7 3 3.
+ <_>
+
+ <_>
+ 10 1 3 21 -1.
+ <_>
+ 10 8 3 7 3.
+ <_>
+
+ <_>
+ 13 11 9 4 -1.
+ <_>
+ 13 13 9 2 2.
+ <_>
+
+ <_>
+ 3 10 17 9 -1.
+ <_>
+ 3 13 17 3 3.
+ <_>
+
+ <_>
+ 13 8 8 15 -1.
+ <_>
+ 13 13 8 5 3.
+ <_>
+
+ <_>
+ 3 8 8 15 -1.
+ <_>
+ 3 13 8 5 3.
+ <_>
+
+ <_>
+ 11 14 10 8 -1.
+ <_>
+ 16 14 5 4 2.
+ <_>
+ 11 18 5 4 2.
+ <_>
+
+ <_>
+ 0 18 22 6 -1.
+ <_>
+ 0 18 11 3 2.
+ <_>
+ 11 21 11 3 2.
+ <_>
+
+ <_>
+ 0 16 24 4 -1.
+ <_>
+ 0 16 12 4 2.
+ <_>
+
+ <_>
+ 6 20 12 3 -1.
+ <_>
+ 12 20 6 3 2.
+ <_>
+
+ <_>
+ 18 12 6 12 -1.
+ <_>
+ 21 12 3 6 2.
+ <_>
+ 18 18 3 6 2.
+ <_>
+
+ <_>
+ 0 12 6 12 -1.
+ <_>
+ 0 12 3 6 2.
+ <_>
+ 3 18 3 6 2.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 1 6 22 10 -1.
+ <_>
+ 1 6 11 5 2.
+ <_>
+ 12 11 11 5 2.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 0 18 18 2 -1.
+ <_>
+ 0 19 18 1 2.
+ <_>
+
+ <_>
+ 3 15 19 3 -1.
+ <_>
+ 3 16 19 1 3.
+ <_>
+
+ <_>
+ 0 13 18 3 -1.
+ <_>
+ 0 14 18 1 3.
+ <_>
+
+ <_>
+ 15 17 9 6 -1.
+ <_>
+ 15 19 9 2 3.
+ <_>
+
+ <_>
+ 0 17 9 6 -1.
+ <_>
+ 0 19 9 2 3.
+ <_>
+
+ <_>
+ 12 17 9 6 -1.
+ <_>
+ 12 19 9 2 3.
+ <_>
+
+ <_>
+ 3 17 9 6 -1.
+ <_>
+ 3 19 9 2 3.
+ <_>
+
+ <_>
+ 16 2 3 20 -1.
+ <_>
+ 17 2 1 20 3.
+ <_>
+
+ <_>
+ 0 13 24 8 -1.
+ <_>
+ 0 17 24 4 2.
+ <_>
+
+ <_>
+ 9 1 6 22 -1.
+ <_>
+ 12 1 3 11 2.
+ <_>
+ 9 12 3 11 2.
+
diff --git a/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/inference_demo.py b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/inference_demo.py
new file mode 100644
index 0000000000..c4e00493d0
--- /dev/null
+++ b/projects/python/perception/facial_expression_recognition/image_based_facial_emotion_estimation/inference_demo.py
@@ -0,0 +1,274 @@
+"""
+Demo script of the image-based facial emotion/expression estimation framework.
+
+It has three main features:
+Image: recognizes facial expressions in images.
+Video: recognizes facial expressions in videos in a frame-based approach.
+Webcam: connects to a webcam and recognizes facial expressions of the closest face detected
+by a face detection algorithm.
+
+Adopted from:
+https://github.com/siqueira-hc/Efficient-Facial-Feature-Learning-with-Wide-Ensemble-based-Convolutional-Neural-Networks
+"""
+
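+# Example invocations (illustrative; the image and video paths are placeholders):
+#   python3 inference_demo.py image -i /path/to/image.jpg -d
+#   python3 inference_demo.py video -i /path/to/video.mp4 -d -f 5
+#   python3 inference_demo.py webcam -w 0 -d
+# If no input is given in image or video mode, a demo file is downloaded automatically.
+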
+# Standard Libraries
+import argparse
+from argparse import RawTextHelpFormatter
+import numpy as np
+from torchvision import transforms
+import PIL
+import cv2
+
+# OpenDR Modules
+from opendr.perception.facial_expression_recognition import FacialEmotionLearner, image_processing
+
+INPUT_IMAGE_SIZE = (96, 96)
+INPUT_IMAGE_NORMALIZATION_MEAN = [0.0, 0.0, 0.0]
+INPUT_IMAGE_NORMALIZATION_STD = [1.0, 1.0, 1.0]
+
+
+def is_none(x):
+ """
+ Verifies if the string 'x' is None or empty.
+ :param x: (string)
+ :return: (bool)
+ """
+ if (x is None) or ((type(x) == str) and (x.strip() == "")):
+ return True
+ else:
+ return False
+
+
+def detect_face(image):
+ """
+ Detects faces in an image.
+ :param image: (ndarray) Raw input image.
+ :return: (ndarray) Coordinates of the first detected face, or None if no face is detected.
+ """
+
+ # Converts to greyscale
+ greyscale_image = image_processing.convert_bgr_to_grey(image)
+
+ # Runs haar cascade classifiers
+ _FACE_DETECTOR_HAAR_CASCADE = cv2.CascadeClassifier("./face_detector/frontal_face.xml")
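+ # detectMultiScale scans the image at multiple scales: scaleFactor controls the pyramid step,
+ # minNeighbors how many overlapping detections are needed to keep a face, and minSize the
+ # smallest face (in pixels) that is reported.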
+ faces = _FACE_DETECTOR_HAAR_CASCADE.detectMultiScale(greyscale_image, scaleFactor=1.2, minNeighbors=9,
+ minSize=(60, 60))
+ face_coordinates = [[[x, y], [x + w, y + h]] for (x, y, w, h) in faces] if not (faces is None) else []
+ face_coordinates = np.array(face_coordinates)
+
+ # Returns None if no face is detected
+ return face_coordinates[0] if (len(face_coordinates) > 0 and (np.sum(face_coordinates[0]) > 0)) else None
+
+
+def _pre_process_input_image(image):
+ """
+ Pre-processes an image for ESR-9.
+ :param image: (ndarray)
+ :return: (ndarray) image
+ """
+
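+ # Resize to the 96x96 network input, convert to a CHW tensor and add a batch dimension.
+ # With mean [0, 0, 0] and std [1, 1, 1] the normalization is effectively a pass-through.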
+ image = image_processing.resize(image, INPUT_IMAGE_SIZE)
+ image = PIL.Image.fromarray(image)
+ image = transforms.Normalize(mean=INPUT_IMAGE_NORMALIZATION_MEAN,
+ std=INPUT_IMAGE_NORMALIZATION_STD)(transforms.ToTensor()(image)).unsqueeze(0)
+ return image.numpy()
+
+
+def _predict(learner, input_face):
+ """
+ Facial emotion/expression estimation. Classifies the pre-processed input image with FacialEmotionLearner.
+
+ :param learner: (FacialEmotionLearner) learner used to classify the pre-processed face.
+ :param input_face: (ndarray) pre-processed input image of a face.
+ :return: The ensemble emotion prediction (based on plurality) and the corresponding (valence, arousal) affect values.
+ """
+
+ # Recognizes facial expression
+ emotion, affect = learner.infer(input_face)
+ # Converts from Tensor to ndarray
+ affect = np.array([a.cpu().detach().numpy() for a in affect])
+ to_return_affect = affect[0] # a numpy array of valence and arousal values
+ to_return_emotion = emotion[0] # the emotion class with confidence tensor
+
+ return to_return_emotion, to_return_affect
+
+
+def recognize_facial_expression(learner, image, display):
+ """
+ Detects a face in the input image.
+ If more than one face is detected, only the first face returned by the detector is used.
+ The detected face is fed to the _predict function which runs FacialEmotionLearner for facial emotion/expression
+ estimation.
+ :param learner: (FacialEmotionLearner) learner used for the prediction.
+ :param image: (ndarray) input image.
+ :param display: (bool) if True, the predictions are drawn on the image; otherwise they are printed.
+ """
+
+ # Detect face
+ face_coordinates = detect_face(image)
+
+ if face_coordinates is None:
+ print("No face detected.")
+ else:
+ face = image[face_coordinates[0][1]:face_coordinates[1][1], face_coordinates[0][0]:face_coordinates[1][0], :]
+ # Pre_process detected face
+ input_face = _pre_process_input_image(face)
+ # Recognize facial expression
+ emotion, affect = _predict(learner, input_face=input_face)
+
+ # display
+ if display:
+ image = cv2.putText(image, "Valence: %.2f" % affect[0], (10, 40 + 0 * 30), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, (0, 255, 255), 2, )
+ image = cv2.putText(image, "Arousal: %.2f" % affect[1], (10, 40 + 1 * 30), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, (0, 255, 255), 2, )
+ image = cv2.putText(image, emotion.description, (10, 40 + 2 * 30), cv2.FONT_HERSHEY_SIMPLEX,
+ 1, (0, 255, 255), 2, )
+ else:
+ print('emotion:', emotion)
+ print('valence, arousal:', affect)
+
+ return image
+
+
+def webcam(learner, camera_id, display, frames):
+ """
+ Receives images from a camera and recognizes
+ facial expressions of the closest face in a frame-based approach.
+ """
+
+ if not image_processing.initialize_video_capture(camera_id):
+ raise RuntimeError("Error on initializing video capture." +
+ "\nCheck whether a webcam is working or not.")
+
+ image_processing.set_fps(frames)
+
+ try:
+ # Loop to process each frame from a VideoCapture object.
+ while image_processing.is_video_capture_open():
+ # Get a frame
+ img, _ = image_processing.get_frame()
+ img = None if (img is None) else recognize_facial_expression(learner, img, display)
+ if display and img is not None:
+ cv2.imshow('Result', img)
+ cv2.waitKey(1)
+
+ except Exception as e:
+ print("Error raised during video mode.")
+ raise e
+ except KeyboardInterrupt:
+ print("Keyboard interrupt event raised.")
+ finally:
+ image_processing.release_video_capture()
+ if display:
+ cv2.destroyAllWindows()
+
+
+def image(learner, input_image_path, display):
+ """
+ Receives the full path to an image file and recognizes
+ facial expressions of the closest face.
+ """
+
+ img = image_processing.read(input_image_path)
+ img = recognize_facial_expression(learner, img, display)
+ if display:
+ cv2.imshow('Result', img)
+ cv2.waitKey(0)
+
+
+def video(learner, input_video_path, display, frames):
+ """
+ Receives the full path to a video file and recognizes
+ facial expressions of the closest face in a frame-based approach.
+ """
+
+ if not image_processing.initialize_video_capture(input_video_path):
+ raise RuntimeError("Error on initializing video capture." +
+ "\nCheck whether working versions of ffmpeg or gstreamer is installed." +
+ "\nSupported file format: MPEG-4 (*.mp4).")
+ image_processing.set_fps(frames)
+
+ try:
+ # Loop to process each frame from a VideoCapture object.
+ while image_processing.is_video_capture_open():
+ # Get a frame
+ img, timestamp = image_processing.get_frame()
+ # Video has been processed
+ if img is None:
+ break
+ else: # Process frame
+ img = None if (img is None) else recognize_facial_expression(learner, img, display)
+ if display and img is not None:
+ cv2.imshow('Result', img)
+ cv2.waitKey(33)
+
+ except Exception as e:
+ print("Error raised during video mode.")
+ raise e
+ finally:
+ image_processing.release_video_capture()
+ if display:
+ cv2.destroyAllWindows()
+
+
+def main():
+ # Parser
+ parser = argparse.ArgumentParser(description='test', formatter_class=RawTextHelpFormatter)
+ parser.add_argument("mode", help="select a method among 'image', 'video' or 'webcam' to run ESR-9.",
+ type=str, choices=["image", "video", "webcam"])
+ parser.add_argument("-d", "--display", help="display the output of ESR-9.",
+ action="store_true")
+ parser.add_argument("-i", "--input", help="define the full path to an image or video.",
+ type=str, default='')
+ parser.add_argument("-es", "--ensemble_size",
+ help="define the size of the ensemble, the number of branches in the model",
+ type=int, default=9)
+ parser.add_argument("--device", help="device to run on, either \'cpu\' or \'cuda\', defaults to \'cuda\'.",
+ default="cuda")
+ parser.add_argument("-w", "--webcam_id",
+ help="define the webcam by 'id' to capture images in the webcam mode." +
+ "If none is selected, the default camera by the OS is used.",
+ type=int, default=-1)
+ parser.add_argument("-f", "--frames", help="define frames of videos and webcam captures.",
+ type=int, default=5)
+
+ args = parser.parse_args()
+
+ learner = FacialEmotionLearner(device=args.device, ensemble_size=args.ensemble_size, dimensional_finetune=False,
+ categorical_train=False)
+ learner.init_model(num_branches=args.ensemble_size)
+ model_path = learner.download(mode="pretrained")
+ learner.load(args.ensemble_size, path_to_saved_network=model_path)
+
+ # Calls to main methods
+ if args.mode == "image":
+ try:
+ if is_none(args.input):
+ args.input = learner.download(mode="demo_image")
+ if is_none(args.input):
+ raise RuntimeError("Error: 'input' is not valid. The argument 'input' is a mandatory "
+ "field when image or video mode is chosen.")
+ image(learner, args.input, args.display)
+ except RuntimeError as e:
+ print(e)
+ elif args.mode == "video":
+ try:
+ if is_none(args.input):
+ args.input = learner.download(mode="demo_video")
+ if is_none(args.input):
+ raise RuntimeError("Error: 'input' is not valid. The argument 'input' is a mandatory "
+ "field when image or video mode is chosen.")
+ video(learner, args.input, args.display, args.frames)
+ except RuntimeError as e:
+ print(e)
+ elif args.mode == "webcam":
+ try:
+ webcam(learner, args.webcam_id, args.display, args.frames)
+ except RuntimeError as e:
+ print(e)
+
+
+if __name__ == "__main__":
+ print("Processing...")
+ main()
+ print("Process has finished!")
diff --git a/projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/README.md b/projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/README.md
similarity index 100%
rename from projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/README.md
rename to projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/README.md
diff --git a/projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/benchmark/benchmark_pstbln.py b/projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/benchmark/benchmark_pstbln.py
similarity index 100%
rename from projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/benchmark/benchmark_pstbln.py
rename to projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/benchmark/benchmark_pstbln.py
diff --git a/projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/demo.py b/projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/demo.py
similarity index 100%
rename from projects/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/demo.py
rename to projects/python/perception/facial_expression_recognition/landmark_based_facial_expression_recognition/demo.py
diff --git a/projects/perception/fall_detection/README.md b/projects/python/perception/fall_detection/README.md
similarity index 100%
rename from projects/perception/fall_detection/README.md
rename to projects/python/perception/fall_detection/README.md
diff --git a/projects/perception/fall_detection/demos/eval_demo.py b/projects/python/perception/fall_detection/demos/eval_demo.py
similarity index 100%
rename from projects/perception/fall_detection/demos/eval_demo.py
rename to projects/python/perception/fall_detection/demos/eval_demo.py
diff --git a/projects/perception/fall_detection/demos/inference_demo.py b/projects/python/perception/fall_detection/demos/inference_demo.py
similarity index 100%
rename from projects/perception/fall_detection/demos/inference_demo.py
rename to projects/python/perception/fall_detection/demos/inference_demo.py
diff --git a/projects/perception/fall_detection/demos/inference_tutorial.ipynb b/projects/python/perception/fall_detection/demos/inference_tutorial.ipynb
similarity index 100%
rename from projects/perception/fall_detection/demos/inference_tutorial.ipynb
rename to projects/python/perception/fall_detection/demos/inference_tutorial.ipynb
diff --git a/projects/perception/fall_detection/demos/webcam_demo.py b/projects/python/perception/fall_detection/demos/webcam_demo.py
similarity index 100%
rename from projects/perception/fall_detection/demos/webcam_demo.py
rename to projects/python/perception/fall_detection/demos/webcam_demo.py
diff --git a/projects/perception/heart_anomaly_detection/README.MD b/projects/python/perception/heart_anomaly_detection/README.MD
similarity index 100%
rename from projects/perception/heart_anomaly_detection/README.MD
rename to projects/python/perception/heart_anomaly_detection/README.MD
diff --git a/projects/perception/heart_anomaly_detection/demo.py b/projects/python/perception/heart_anomaly_detection/demo.py
similarity index 100%
rename from projects/perception/heart_anomaly_detection/demo.py
rename to projects/python/perception/heart_anomaly_detection/demo.py
diff --git a/projects/perception/multimodal_human_centric/audiovisual_emotion_recognition/README.MD b/projects/python/perception/multimodal_human_centric/audiovisual_emotion_recognition/README.MD
similarity index 100%
rename from projects/perception/multimodal_human_centric/audiovisual_emotion_recognition/README.MD
rename to projects/python/perception/multimodal_human_centric/audiovisual_emotion_recognition/README.MD
diff --git a/projects/perception/multimodal_human_centric/audiovisual_emotion_recognition/audiovisual_emotion_recognition_demo.py b/projects/python/perception/multimodal_human_centric/audiovisual_emotion_recognition/audiovisual_emotion_recognition_demo.py
similarity index 100%
rename from projects/perception/multimodal_human_centric/audiovisual_emotion_recognition/audiovisual_emotion_recognition_demo.py
rename to projects/python/perception/multimodal_human_centric/audiovisual_emotion_recognition/audiovisual_emotion_recognition_demo.py
diff --git a/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/README.MD b/projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/README.MD
similarity index 100%
rename from projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/README.MD
rename to projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/README.MD
diff --git a/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/gesture_recognition_demo.py b/projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/gesture_recognition_demo.py
similarity index 100%
rename from projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/gesture_recognition_demo.py
rename to projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/gesture_recognition_demo.py
diff --git a/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_depth.png b/projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_depth.png
similarity index 100%
rename from projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_depth.png
rename to projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_depth.png
diff --git a/projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_rgb.png b/projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_rgb.png
similarity index 100%
rename from projects/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_rgb.png
rename to projects/python/perception/multimodal_human_centric/rgbd_hand_gesture_recognition/input_rgb.png
diff --git a/projects/perception/object_detection_2d/centernet/README.md b/projects/python/perception/object_detection_2d/centernet/README.md
similarity index 100%
rename from projects/perception/object_detection_2d/centernet/README.md
rename to projects/python/perception/object_detection_2d/centernet/README.md
diff --git a/projects/perception/object_detection_2d/centernet/eval_demo.py b/projects/python/perception/object_detection_2d/centernet/eval_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/centernet/eval_demo.py
rename to projects/python/perception/object_detection_2d/centernet/eval_demo.py
diff --git a/projects/perception/object_detection_2d/centernet/inference_demo.py b/projects/python/perception/object_detection_2d/centernet/inference_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/centernet/inference_demo.py
rename to projects/python/perception/object_detection_2d/centernet/inference_demo.py
diff --git a/projects/perception/object_detection_2d/centernet/inference_tutorial.ipynb b/projects/python/perception/object_detection_2d/centernet/inference_tutorial.ipynb
similarity index 100%
rename from projects/perception/object_detection_2d/centernet/inference_tutorial.ipynb
rename to projects/python/perception/object_detection_2d/centernet/inference_tutorial.ipynb
diff --git a/projects/perception/object_detection_2d/centernet/train_demo.py b/projects/python/perception/object_detection_2d/centernet/train_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/centernet/train_demo.py
rename to projects/python/perception/object_detection_2d/centernet/train_demo.py
diff --git a/projects/perception/object_detection_2d/detr/README.md b/projects/python/perception/object_detection_2d/detr/README.md
similarity index 100%
rename from projects/perception/object_detection_2d/detr/README.md
rename to projects/python/perception/object_detection_2d/detr/README.md
diff --git a/projects/perception/object_detection_2d/detr/eval_demo.py b/projects/python/perception/object_detection_2d/detr/eval_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/detr/eval_demo.py
rename to projects/python/perception/object_detection_2d/detr/eval_demo.py
diff --git a/projects/perception/object_detection_2d/detr/inference_demo.py b/projects/python/perception/object_detection_2d/detr/inference_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/detr/inference_demo.py
rename to projects/python/perception/object_detection_2d/detr/inference_demo.py
diff --git a/projects/perception/object_detection_2d/detr/inference_tutorial.ipynb b/projects/python/perception/object_detection_2d/detr/inference_tutorial.ipynb
similarity index 100%
rename from projects/perception/object_detection_2d/detr/inference_tutorial.ipynb
rename to projects/python/perception/object_detection_2d/detr/inference_tutorial.ipynb
diff --git a/projects/perception/object_detection_2d/detr/train_demo.py b/projects/python/perception/object_detection_2d/detr/train_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/detr/train_demo.py
rename to projects/python/perception/object_detection_2d/detr/train_demo.py
diff --git a/projects/perception/object_detection_2d/gem/README.md b/projects/python/perception/object_detection_2d/gem/README.md
similarity index 100%
rename from projects/perception/object_detection_2d/gem/README.md
rename to projects/python/perception/object_detection_2d/gem/README.md
diff --git a/projects/perception/object_detection_2d/gem/inference_demo.py b/projects/python/perception/object_detection_2d/gem/inference_demo.py
similarity index 100%
rename from projects/perception/object_detection_2d/gem/inference_demo.py
rename to projects/python/perception/object_detection_2d/gem/inference_demo.py
diff --git a/projects/perception/object_detection_2d/gem/inference_tutorial.ipynb b/projects/python/perception/object_detection_2d/gem/inference_tutorial.ipynb
similarity index 100%
rename from projects/perception/object_detection_2d/gem/inference_tutorial.ipynb
rename to projects/python/perception/object_detection_2d/gem/inference_tutorial.ipynb
diff --git a/projects/python/perception/object_detection_2d/nanodet/README.md b/projects/python/perception/object_detection_2d/nanodet/README.md
new file mode 100644
index 0000000000..92c456c235
--- /dev/null
+++ b/projects/python/perception/object_detection_2d/nanodet/README.md
@@ -0,0 +1,18 @@
+# NanoDet Demos
+
+This folder contains minimal code usage examples that showcase the basic functionality of the NanodetLearner
+provided by OpenDR. Specifically, the following examples are provided (example usage for each demo is given at the end of this file):
+1. inference_demo.py: Perform inference on a single downloaded example image. Setting `--device cpu` performs inference on CPU.
+2. eval_demo.py: Perform evaluation on the `COCO dataset`, implemented in OpenDR format. The user must first download
+ the dataset and provide the path to the dataset root via `--data-root /path/to/coco_dataset`.
+ Setting `--device cpu` performs evaluation on CPU.
+
+3. train_demo.py: Fit the learner to a dataset. PASCAL VOC and COCO datasets are supported via the `ExternalDataset` class.
+ An example of training on the `COCO dataset` is provided. The user must set the dataset type using the `--dataset`
+ argument and provide the dataset root path with the `--data-root` argument. The config file for the specific
+ model is selected with `--model <model name>`. Setting `--device cpu` performs training on CPU. Additional command
+ line arguments can be set to overwrite various training hyperparameters from the provided config file, and running
+ `python3 train_demo.py -h` prints information about them on stdout.
+
+ Example usage:
+ `python3 train_demo.py --model plus-m_416 --dataset coco --data-root /path/to/coco_dataset`
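+
+ Example usage of the other two demos (the dataset path is a placeholder, model names follow the learner's options):
+ `python3 inference_demo.py --model m --device cpu`
+ `python3 eval_demo.py --model m --data-root /path/to/coco_dataset --device cpu`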
\ No newline at end of file
diff --git a/projects/python/perception/object_detection_2d/nanodet/eval_demo.py b/projects/python/perception/object_detection_2d/nanodet/eval_demo.py
new file mode 100644
index 0000000000..759c6aa4bd
--- /dev/null
+++ b/projects/python/perception/object_detection_2d/nanodet/eval_demo.py
@@ -0,0 +1,34 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+
+from opendr.perception.object_detection_2d import NanodetLearner
+from opendr.engine.datasets import ExternalDataset
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--data-root", help="Dataset root folder", type=str)
+ parser.add_argument("--model", help="Model that config file will be used", type=str)
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+
+ args = parser.parse_args()
+
+ val_dataset = ExternalDataset(args.data_root, 'coco')
+ nanodet = NanodetLearner(model_to_use=args.model, device=args.device)
+
+ nanodet.download("./predefined_examples", mode="pretrained")
+ nanodet.load("./predefined_examples/nanodet-{}/nanodet-{}.ckpt".format(args.model, args.model), verbose=True)
+ nanodet.eval(val_dataset)
diff --git a/projects/python/perception/object_detection_2d/nanodet/inference_demo.py b/projects/python/perception/object_detection_2d/nanodet/inference_demo.py
new file mode 100644
index 0000000000..71e95b15fb
--- /dev/null
+++ b/projects/python/perception/object_detection_2d/nanodet/inference_demo.py
@@ -0,0 +1,34 @@
+# Copyright 2020-2022 OpenDR European Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+from opendr.perception.object_detection_2d import NanodetLearner
+from opendr.engine.data import Image
+from opendr.perception.object_detection_2d import draw_bounding_boxes
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--device", help="Device to use (cpu, cuda)", type=str, default="cuda", choices=["cuda", "cpu"])
+ parser.add_argument("--model", help="Model that config file will be used", type=str, default='m')
+ args = parser.parse_args()
+
+ nanodet = NanodetLearner(model_to_use=args.model, device=args.device)
+ nanodet.download("./predefined_examples", mode="pretrained")
+ nanodet.load("./predefined_examples/nanodet_{}".format(args.model), verbose=True)
+ nanodet.download("./predefined_examples", mode="images")
+ img = Image.open("./predefined_examples/000000000036.jpg")
+ boxes = nanodet.infer(input=img)
+
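+    # Draw the predicted bounding boxes on the image and display the result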
+ draw_bounding_boxes(img.opencv(), boxes, class_names=nanodet.classes, show=True)
diff --git a/projects/python/perception/object_detection_2d/nanodet/inference_tutorial.ipynb b/projects/python/perception/object_detection_2d/nanodet/inference_tutorial.ipynb
new file mode 100644
index 0000000000..96af81257c
--- /dev/null
+++ b/projects/python/perception/object_detection_2d/nanodet/inference_tutorial.ipynb
@@ -0,0 +1,790 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f8b84e11-4e6b-40f6-807b-ec27281659e9",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "# Nanodet Tutorial\n",
+ "\n",
+ "This notebook provides a tutorial for running inference on a static image in order to detect objects.\n",
+ "The implementation of the [NanodetLearner](../../../../docs/reference/nanodet.md) is largely copied from the [Nanodet github](https://github.com/RangiLyu/nanodet).\n",
+ "More information on modifications and license can be found\n",
+ "[here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/object_detection_2d/nanodet/README.md)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b671ddd9-583b-418a-870e-69dd3c3db718",
+ "metadata": {},
+ "source": [
+ "First, we need to import the learner and initialize it:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "b6f3d99a-b702-472b-b8d0-95a551e7b9ba",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/manos/new_opendr/opendr/venv/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+ " from .autonotebook import tqdm as notebook_tqdm\n",
+ "/home/manos/new_opendr/opendr/venv/lib/python3.8/site-packages/gluoncv/__init__.py:40: UserWarning: Both `mxnet==1.8.0` and `torch==1.9.0+cu111` are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.\n",
+ " warnings.warn(f'Both `mxnet=={mx.__version__}` and `torch=={torch.__version__}` are installed. '\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "model size is 1.5x\n",
+ "init weights...\n",
+ "Finish initialize NanoDet-Plus Head.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from opendr.perception.object_detection_2d import NanodetLearner\n",
+ "\n",
+ "model=\"plus_m_1.5x_416\"\n",
+ "\n",
+ "nanodet = NanodetLearner(model_to_use=model, device=\"cuda\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4ef5ce70-8294-446a-8cc2-b3eba5e1037b",
+ "metadata": {},
+ "source": [
+ "Note that we can alter the device (e.g., 'cpu', 'cuda', etc.), on which the model runs, as well as the model from a variety of options included a custom you can make (\"EfficientNet_Lite0_320\", \"EfficientNet_Lite1_416\", \"EfficientNet_Lite2_512\",\n",
+ " \"RepVGG_A0_416\", \"t\", \"g\", \"m\", \"m_416\", \"m_0.5x\", \"m_1.5x\", \"m_1.5x_416\",\n",
+ " \"plus_m_320\", \"plus_m_1.5x_320\", \"plus_m_416\", \"plus_m_1.5x_416\", \"custom\")."
+ ]
+ },
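+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A minimal sketch (not used in the rest of this tutorial): a lighter configuration\n",
+    "# can be created on CPU in exactly the same way.\n",
+    "nanodet_cpu = NanodetLearner(model_to_use=\"m_416\", device=\"cpu\")"
+   ]
+  },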
+ {
+ "cell_type": "markdown",
+ "id": "10c74615-61ec-43ed-a1ae-57dceedfe938",
+ "metadata": {},
+ "source": [
+ "After creating our model, we need to download pre-trained weights."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "8a680c28-8f42-4b4a-8c6e-2580b7be2da5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "save_path = \"./predefined_examples\"\n",
+ "nanodet.download(path=save_path, mode=\"pretrained\")\n",
+ "\n",
+ "load_model_weights=\"./predefined_examples/nanodet_{}\".format(model)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0e63e7a9-4310-4633-a2ac-052e94ad3ea0",
+ "metadata": {},
+ "source": [
+ "and load our weights:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "e12f582b-c001-4b9d-b396-4260e23139f6",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Model name: plus_m_1.5x_416 --> ./predefined_examples/nanodet_plus_m_1.5x_416/plus_m_1.5x_416.json\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:root:No param aux_fpn.reduce_layers.0.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.0.bn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.bn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.0.bn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.bn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.0.bn.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.bn.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.0.bn.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.bn.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.0.bn.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.0.bn.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.bn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.bn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.bn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.bn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.bn.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.bn.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.bn.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.bn.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.1.bn.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.1.bn.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.bn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.bn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.bn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.bn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.bn.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.bn.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.bn.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.bn.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.reduce_layers.2.bn.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.reduce_layers.2.bn.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.2.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.2.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.0.blocks.0.shortcut.3.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.2.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.2.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.top_down_blocks.1.blocks.0.shortcut.3.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.depthwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.depthwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pointwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pointwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.dwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.dwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.dwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.dwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.dwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.dwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.dwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.dwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.dwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.dwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.0.pwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.0.pwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.depthwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.depthwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pointwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pointwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.dwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.dwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.dwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.dwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.dwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.dwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.dwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.dwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.dwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.dwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.downsamples.1.pwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.downsamples.1.pwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.2.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.2.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.0.blocks.0.shortcut.3.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost1.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.primary_conv.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.ghost2.cheap_operation.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.0.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.0.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.1.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.2.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.2.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.bottom_up_blocks.1.blocks.0.shortcut.3.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.depthwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.depthwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pointwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pointwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.dwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.dwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.dwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.dwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.dwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.dwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.dwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.dwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.dwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.dwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_in_conv.0.pwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_in_conv.0.pwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.depthwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.depthwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pointwise.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pointwise.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.dwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.dwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.dwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.dwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.dwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.dwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.dwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.dwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.dwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.dwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pwnorm.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pwnorm.weight.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pwnorm.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pwnorm.bias.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pwnorm.running_mean.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pwnorm.running_mean.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pwnorm.running_var.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pwnorm.running_var.\u001b[0m\n",
+ "INFO:root:No param aux_fpn.extra_lvl_out_conv.0.pwnorm.num_batches_tracked.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_fpn.extra_lvl_out_conv.0.pwnorm.num_batches_tracked.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.0.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.0.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.0.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.0.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.0.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.0.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.1.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.1.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.1.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.1.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.1.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.1.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.2.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.2.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.2.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.2.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.2.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.2.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.3.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.3.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.3.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.3.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.cls_convs.3.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.cls_convs.3.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.0.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.0.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.0.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.0.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.0.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.0.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.1.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.1.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.1.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.1.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.1.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.1.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.2.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.2.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.2.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.2.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.2.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.2.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.3.conv.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.3.conv.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.3.gn.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.3.gn.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.reg_convs.3.gn.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.reg_convs.3.gn.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.gfl_cls.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.gfl_cls.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.gfl_cls.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.gfl_cls.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.gfl_reg.weight.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.gfl_reg.weight.\u001b[0m\n",
+ "INFO:root:No param aux_head.gfl_reg.bias.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.gfl_reg.bias.\u001b[0m\n",
+ "INFO:root:No param aux_head.scales.0.scale.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.scales.0.scale.\u001b[0m\n",
+ "INFO:root:No param aux_head.scales.1.scale.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.scales.1.scale.\u001b[0m\n",
+ "INFO:root:No param aux_head.scales.2.scale.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.scales.2.scale.\u001b[0m\n",
+ "INFO:root:No param aux_head.scales.3.scale.\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mNo param aux_head.scales.3.scale.\u001b[0m\n",
+ "INFO:root:Loaded model weight from ./predefined_examples/nanodet_plus_m_1.5x_416\n",
+ "\u001b[1m\u001b[35m[root]\u001b[0m\u001b[34m[09-01 18:10:13]\u001b[0m\u001b[32mINFO:\u001b[0m\u001b[37mLoaded model weight from ./predefined_examples/nanodet_plus_m_1.5x_416\u001b[0m\n"
+ ]
+ }
+ ],
+ "source": [
+ "nanodet.load(path=load_model_weights, verbose=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4e3ce347-391f-45a1-baf8-91d8a9ce04a7",
+ "metadata": {},
+ "source": [
+ "We will also download one sample image and load it, so we can use it in OpenDR for testing:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "9efba6eb-5235-4e31-a002-1bcb6e311704",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "nanodet.download(path=save_path, mode=\"images\")\n",
+ "\n",
+ "from opendr.engine.data import Image\n",
+ "image_path = \"./predefined_examples/000000000036.jpg\"\n",
+ "img = Image.open(image_path)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "9f083566-3d57-4db6-baa5-0fefdf8fa8ea",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAD8CAYAAADZhFAmAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAAsTAAALEwEAmpwYAAEAAElEQVR4nOz9Z6xlWXbfCf7W3vucc+993oSPyEgT6W1l+SKrKJJFSqIIseUIanpkBjOjLy1gHDAjzJf5KqCBAWYwGCOgGyN2qyVREtmkqCqaIllFsrxlmsrMykgXGd68ePaac/beaz6sfe6LLFZmFWjUMUAeMirjvXjm3nP22nut//r//0tUlfev96/3rx9+uf+pX8D71/vX/79c7wfL+9f71494vR8s71/vXz/i9X6wvH+9f/2I1/vB8v71/vUjXu8Hy/vX+9ePeP2lBIuI/DUReUVEzovIP/vL+B3vX+9f/7kv+Yvus4iIB74H/AxwEfg68PdV9bt/ob/o/ev96z/z9ZdxsnwEOK+qr6tqC/wb4Bf+En7P+9f713/WK/wl/MxTwNt3fHwR+Oh7fcNgeVmXjhyZfywi9t/ysaIIgpR/c3L4d9DylYqqfaTl75R/VVXyHR/PvwVQBRHK1ysiggI52w/LqvPvt98J4gTK73KCvZ7ymu88qeevQUDu+J39986/vvwMJ3e+a+b3QOlfx+HPVVXekRPoO9/3ne/t+++r3vH1/RdWRJy397XYVDgRbu9PyDHjvGNhWLGxuIgTd8czsStrZne8R8r58LV93+vR8nDe8b6/7z7NH0z5xDs+zeHrFYTvz4j0+75Hvu93AGj+vo9RuOO+CjAdj4lt+86HUK6/jGD5kS4R+SfAPwFY2Nzk7/zX/zVOIKktwKEPVN6TcqLLGYctSAc03rFU19TBzRdNcI4uKwddR8rQpURVBTIw6yLTlOmSPa0qeFLOAHQxA0LOGeTwIeQEk5iYxsQsJjTZom6Cp6k8qkrMMKg8o9ojAuM20nYJ7xziIKdMlxXnPEEcKgBKzkpMGRXwInjnCN4RRPDeHpoTR3AOsN+z13ZMJh1dhpSVnJQYlZwzKkLO9tD7oJovlDve0zxWy3rUbO8dzRwPOywMFMnCg0eW+ejZE/w/f+cbjG9P8aI8+ugx/umnP8HK6Ag5JyCXRZZREl999evcPNin6+x+zbpE22ViTHQx0cVM0vJ6s5KyotnuRcq2Ockdrz1rRlFyVMSV95ft+wVIOdlO0G9qOYHappqylkDAniuC5oSIkNU2xJwSqhlRSCmX3wff/I3Pvuua/csIlkvAmTs+Pl0+945LVf8F8C8Ajpx7QBXw3kPWcnrYDUg5I+IQgZgyIuAVYs6Q7MY23lM7h2pm5D3ZCyl4u7HOoy4zjQlHpgqBKgiqni4DGYIPzLpImxMxZergqYLgg6NJnmmXiCmhKLUPLNQVPjgOZi1kxffbt4otThFb7MGRY8I5QcRyXucCHRGnQi6Lm5xxTsjOkZISnODLaZMVZrEjxhLMGVJSNJcbKRaUdhrO1847TmcXPDllcllsqWzgwXlElJSgTcIoK2jm9mTG2miEDw7nhdRFdsYtu5Mxy8OymeRsiy4r3nuevf9DpBQtdHIipsg0dqSszLqWNiYmaUbXRtqUmcaOWdsxix1tTEzbjpgS01lHlxMpKzEl21hiIqVk7zPbexXV+UnWH/sxJgDc/KRQUky2SWV7zd5ePSD281SR8l8FxL97ZfKXESxfBx4UkfuwIPkl4H/2Xt/gRFisAk0IFvkI3lkguOxAbOd3IYDaYqacQpQ3mQHvBHGh7FaZDoeIMPAeapjmDGpflxOgGXGOREIEGh9wksquLvOUrw5+HqhV6ANTCU1FzJaGxKwkmOc9OWeCc/jg5qeXE0Fyst2/3/GVcvqlkioKbUzMRAg+E1NmPI1M20hSIXYJ57w9WCe4LKizHdMyGFvEKKgo4hyC3BFIJd0rp1xKtuC6BMSMVIH9WYd3wqgJTEKgG0+ZtImdyZhTZaFpSYudt3tc+xpfj2zxlTMsZdsoNGt5XSDiSoqWcXKYAivl1EGJ5R7lnIk50sZEl+zPLEXaLtKVz086C7bJbGb/3kWmbccsRrqYaduOlEsQxkgCUpeIXSTnbL9T7YQj2/16t+svPFhUNYrIPwV+G/DAf6uqL77X9whiN1XtiK3KQk2qVM5TeU/MEUWoJNhuKkLt7OPKOXuEJcF3KMF7glemyW5ApaCuHNsIs2zHsPeB2EbbwXLCe0cdLFjsDQnee9TbjqbZdrfgHOo9KopXR9RI0gwozjmc2ILKyRaqYgVEyhHnHd45HH3NIsSc7QSLma6zlKGvy1QhRktdcrZFmFRxTgjOkRSi7RwA5V7aDelPGO/tvStSvkzAKaL2cUtNzi1OIWaljYmVuuKWjC2Y2sjN/Ql2pHmc2EkevCcle70xTXHiyOWERSCVe5XVNgdxYvWbOtvkkh2R4hziSioqubwPZ5tk2SxVFBFLTb335JTKBqHz1xJTn27l8ppSOU2VLkdsM2oZzyJJk21MMTJLieks8uKv//q7rtO/lJpFVT8DfOZH/XoB3Lzihi7bg3diN0/Ebrgg1N7RpUQQR+M9Hj3cncoOZbWOoiq0KbHXRUpNbvWEKt4J5HIzUTuZvLfCsKTywTsklBqBsnAFomr5upIKZMULDEJFTInK2e6ZxRZuU1co0HYRRCxtKydcTNDGhCq0baKdRVv4YnWY6zeOlHHe246bLCBVLSVLKZNSLsEBsSspGxl1jlAKIe9cSf0s8L048EqSTNSAZsWJBd72rOPY8iJvXt3BIcRZ5Or2Lil1VFVFjKksZrvvbTzguTe/jQO8D7ZpOU8VKpwTahdAPFmhCjWVDzhxeBFC+XsIVdloPK48H4eQEERcWSN5Xl+CpaMignOBjD1zVUt3wWpC55VKhAG1BWzdsDa0n2VBb+vNO8/yoHnXdfo/WYF/5yUCA+9tt8iZaYw456i8wzl7kXYTwSP4UJW6VHHBoznPA8ayCzulskLlHANvu9g0RlLKBG8/KzuhwpEy2NlkKURGDWjIGUthS36LWA3UJZJYEe5E6USZtpk2Wu0BWDGrlGJSmXWRruTGOTLf+VJSYrIdfzqzE07KiaACmbITi6Vzin0eVZy3955zxntXUthDBJBSUyRsV9asiHi886W8st/rnUNdYJJqarF7d313j6Ori4Ag3tHNOq7vT5ilDu+z/QxLPAvAUjNuE/uzsW1+zqOa52iTqBBztGJe9bD2MHiRGDPeCZW31wdW44lzOOepgyd4A1K8eOqqxjtH7QODuib4QBV8+beKKlTkZJmDc4alOnF4H+aYmqXwqV+FfwpN/P7r7ggWhBAcPb7rnaMOgarshnam6Dy9cOUUEedAsVQgZzq1OkYKTisoToRRCHRZSc52ZF/SHsFg4UqssGtLXUK2oHElpbKdyxuogO3enWZmbTK0J1tgWqblmHaZmJLVVEBWoW2zBUZMpBRJZYGrKqmkiikZiiNlIYsKQixQ6TyDtEVYAhjstTt
xSI+COZ3D3s4JMdviDCHYz51D7n3NIOQQ2JcVlnQPEcfVvX0e3lhDRAghEGeJW3sTZt2MUbVkwVeQJe+FnD23u2W+fXFGE5TKKbUXanEEDyIZLwGnGS92IqiU+jEbaqZkvHMl3Swbl/bghZJSmteo/f2ISckpU1UeL0Jd26lUe0cm2/FePl85hzhPJcKwrkg509SVrbfK0NdJO3nXdXp3BEvpQ+RSeFbi5whTShm8I0i5iRQIEcHnSHSCRmWaM7NsJ0uTHaEAAIlyFIuwUAWcc8ySFZ4OxeHIFeSYqdR29FnMdJpw4iHa75x2kTbp4emgzF9P5QOaEgllNjPkSrCUx1KkkioVuDapolnoCgyqCLlLlmSopVtkcNmCNWXIKSEF8QNL75A8v4e5gBWqUhZynzYKYgUMKSVC5axuKL2KnGzRO4FcXnMlcGV3nw+fPoZTW4zdbMb+tGNvesDG0lFyTogoObfl9dQ0zSrfvn7LTsRyGpbfhA8BUUunbRnbSRIceLE0FlECSu2wjzVTeUVzZFh50IQn07WdIYgKXewgCcMBLDSe6TjhvIEjqvZzU874qSfnRC4f9/fHTnhnrQRVdsfTd12nd0WwAPNcOZd8fpKSwYMKqe1Yqiu8E/s4KyrQxUhXCtJZTJZSCDTB4UtalrDejBdhEDwuZ7qYSECXM6ijS1aTJLXaoC2omWKplgrMorI37Zi2cd5EVBWcg5Q6K7Jjpou2MLUUn7n0EFK20yyX0yeLAo6shvH3KVPwVveollMtp/mDBex1ZSWTcV5Kv0StHik7MjCHmRxC0mwBUtI+QRGxmlCd/bdPVbI6lI7d/RmVF05sLPLWpEMnymySuLW/xz2bttn0J6H1jhL3rq/gHXRZcN5BFrKUzSBr2Qz6QPJIcuRorw3sXjkpOK9qCX77Db4gcKVFiy+njhcDh5gmfCcFfFEqsRPMoVSubEmS8WSCQOUy3jA4NGcqB2gmf19T+M7rrgmWrMI4RsYxkVWYtB3Be1uUGQ6iMgyl8DMAlEnMzLrEtIs48QSsV9HGhCv1R1JDxpwTaxCWIjBmJZZiN5edJnMI/4bgygOBLmXGbcdk1jGd2cJ2UtJH75hptr5AabzhBC8OLUV3ViWW06hfzP1OVjZ9vDeUJxUok7KjS0F7XFk4Tg6RuZzsNTPvRJfUSvteuMxrIICYkxXrJTidy/NGHdlStqQOdY42tmQJHDu6zNu395FxoG0T13b3rcEXGkRqnHbzlPjYYsOoqdidxr4Ut/RQFMXNU0OrD2yDlFKQi+YSKFo2Iz9POz2HGLMFtKfViBdHwtGp1WIkQXFoLKks9t48zFsSIoc9Iiv9CoKIrYVxeveQuEuCxR7gTGHc6Rwrz7mlrqzYY+poKsegsgBqoxL7RZpsF2kVFEfGehFSKv4qKeKg7eHacnrF0vgylMZOk6haivOMrwJ15YmzyKxTZq3SdXleEjoRQmB+Cqnaw9FYimqFlCNgu6aWNDFLaYZRHpwTC57SXBNk/hoLsms7bAa87YQu+L63Nk+pDukspfcUDUZVT/lCKcFrv9eVU1ecfX+XleRt4YJjazzl6NLiHHDIMXJ194CYIt7VBUQoAYyw2DQ8cnyRN25PSj2RSVhHPWZ7DymXQJjXZpYGIW7enZcSam5+H9QWfYGNDSlzh51456zJW9K5nO3UAbVA4xAttyUh83vUI2Gqti7eo76/O4Kly5nb045pNhh1POvKzRW6tt9dM9NO2KElOLHjUsuDBirX56AJQZi2EVWDf1ux5e3dYb0hqoi3tEvQebDk8mA0BAY1kJWDWcfBpLUivdxoESFT+hsFWtaC0OX5wsjz3avfUYHy5Kzuct42A4NtbTH0iz5rxoklIF4cSS21cyGgapwt0dIiFHeYOjohluLelr07TOsoiworeVRAe+qNZpx0djJ6z429A06tLuGDp2oq4iyxtT9j0o6pwsAQuOwBB6Ue+T/+xDN2X3IiqaW8MVtqO4uJWcxMYmK/bVER9mcdUZWDaWtNV4VJl0gI0zYx6SKzLjFLiVlS2sJkaKMzmko2aDnnXHo54J1aOSfQo1xSnpPzDs2Wvln5128gpUn6Huv0rgiWnJXbk66gIEKXhVnbkZLivcMHX47yktM6R1OCRMTNu71zMl3pyKoKrSbbkUSYdFYLhAIP55nxtmKyIjVHO1WqYA3NLiqT3HJrd8p4GhGsCYna7hhzJql1/HOyRmFfVOdshbohOFIa+2WXy2Vxlycjd8DBttXb53zfTRZb4OLL9xfY2QppxamlcbGkdk4cTvvTJ/dr5nAXxWo4VbWUrZQJjkQlBn87cVzb3eXx40epgqMNjnbacXu/ZdJNWC3QsGpfD9hrHVah/Dzrxmud5ycfpS4SzfONxZXTrieUSqkFRbSwJsQKc5RU2gGqBrS0sSublHDQtcy6XJ4JTGOmTRiNRmGWIvuzyDQaWDNLmVkXmbQWhLMu0aXM6+/Bw78rgiWpsjeJpXCXeec5aUaSImRC5ea4vCvQn6jtFClbTRCzzgl7/WLUkg6UUEMQZjGRk/18K+ns4E+5z6cV1yZC8IwnifE0lqTAfr93gko2ACIpKlqImSX/vyMNopwYiswL8pwN+fIFPIhlp5PS40D6PpFS2RkwJwAaK3qOSeMLb86IiPZpW5f2/u00YY74OO9xWjhV/cIou7AtxL55qdyetiwNrQ45CMDUOvs70ynHczK0S+7ci7Xs0HZv7TRj3gBzUnovzuo5A8xcobn06VePghYmtvNWG5Zg7xu0i7Ur96sHNLLx5nIk+HqOGh5+jaWM2t+gHg1US8i6aA3jv/ffLLzrOr0rggUsdZikTDuzws1yTMtlVaxTbbCnI5cFPwiehTowrAJtUm4eTImizJ9fyfd76nnsj+0stJ3xj1K52d55Yk+BSd5Qqax0uWD4mXla5ZzDBY9zdmqkwimyoLBCX0oH3U6Dw76G966kbbawREvdIIbkiHPz/zrNFoQ94lN21UL5Kz9W56eG9jtzATekP8GwfzOEquzWBRpwau8n5gQSMHDWnsf+LKLA6uKQ29tjWjpms8jN/QMePJpKemdfm3MujUil66aA4FxdTv50x6K1brw40GxESftA59Qe1TwHNA5rMZ1/7Nz3U/TteFQSTsL8+w1lvANG1x75sAUizuFSuddByuZy16NhhioNVAx+ba04yxm0ApydJsE5BsF2lC4lYwd7RyWA9wTv0Lbvc1int7+xRpo7PD1S34cpjb2UKRwnK34PphHvfQEPHFVtcgGQkvMWNq9hzkg+3CUpi9V5b6dE4TeJ2Ody6RHkUtC7fov3BjQ4J/O+SqnzgUPkRkSgvK95vBR0By3UndIZL+URKcY5DO6yIn29UxanL2lUxqPZ6paUlZ3plGPLQy4EBw7aNnJtdw8oCzqn+WtLyRay9wOk0EjyHazxw0VrR5n34Y4AOPx6792cb3b4vg9P1pwPuWN9ym20IAc4kFy+381h/qyHbANLS63/pNKz2zt7ge9R4d8VwdLvFsPG08ZMTBFUCF6om0AdoPaeEDyFD0hdBR
zCrFDasyrDEEgDyNoa69fbA+8K92q+cFLpFMeIc37OvO1vvoEGyqRN88ZjSqkkCWrNU+whZQVfCT67eRpTVR5NliIWvvHhg895TutBe+C3MHV7ohXKcpqy54b02E1/uuTSY3Jg2GsJOMWatf3OmAvC05M1je0guKSoE5w1fOapKqWE6tSDxPkuf3P3gOMri1h8OuKsY2tvTJdmJSis2WmbUiHDviM9kxI4+Z1pE4cwuv09FTaABe0h/Z/5++6vd6Z+9twQayf0sLAPbl4voj2MnOcBZ6/Zzb9f1VPQkne97opgMc2GErxSBYeLgkdYHNUsDyoa78gos5QOxVJIKdAiHmtWxmx1RFMFgy1TKrtrqSXU9vySuRC88a16oVZfE/W7eRfTXCQmUoiVCqH0Jpwwf5gueERNdNY/15QyXoVIBnFFCWnpCv2D7RnJhY0r3lnnfuca7sj9aMwlGOYdFQ6bFFYIzxeP7xeLFc9auF99+6IgExZYWtI+kcOUBaEtYKuKNVyvHYw5tWzBgnekaeTG3oxpZ7SWvtnZp3u2wN08fdKebVxOhD5g+t+rpb48hHAL2sDhur2z/rCaqKg1lTmQgh72T6xHZahoKQHLqWYnXErpcMMqm4r31TtAmB903RXBoiiTmPClYF0aVAyqwLD2NMEz8H6ebyomtupiZm8WcV2mTXkOJ9viUKNVFNw8eutQx9boDqFypKgQAlmiUTr6Ard0w1UtWBSd70CqlvfnlKhLo7PFCJQpG9sYLbRzMd5Tykooik7EBFf9CdAvUoeBFCKKpszCeMKZ62/TbJzhTWBPArEwW6TQ1KXHifv8uyB8Asa7cgXxsht8mKbBXJat5XDJqvNF1RbNTBM8oFzfPeCJY0cMmRMhtpGDacf+bMzKcI7PkgvTYN48fceu3QcH83/rUylgnnId1gsONBVticwl3fba+2dRckgois/DE8eCz4LSeZn/u9qeZZILNZp/jKkEd1FbvkcedlcES68/TykhzqS6lbOUp0ux5AnYYhPHpIuMW5OvOqfzWrcvbh1CFYz51cVM5RyJjKsddWl+RZeZdZYy6HyHMXg2Fqo6WB2iQIoJUQekObrVJRNn5VTK5dAXsHJ4ErlCcdF5hjV/LySdQ8YB2JjOePL6Hhtv3GKnm/C/+oBnZzbl5e2bvJQy55sR15sR+6JE70pDT+ZgSH9S9EpTAw3Krq6HxMS56qpk8NaoBVA69XQxUyfTD+1OZywNaoaDimnd0k5aJrPE7fEBp9cEEY9z+o5dui/4gTtOmDuL9cOFfWdgzU8Kd1jbWV1WWAAlGA8rD0t1+5/TAwjzrrzrG51u/poMKRQQI8E68eXkyX8qvfv+664IFu8cy01taZQ4muBwmNjJcuo0h2YVYdpZRzgUfUdMhoD5sku6yjGqKrpcmMF9Z7gIpbTkxh7Bt7EUAJbn5oKkdF1nhbceImpJI5rEAhT7XTHnUqQbFb9fqFUdLA00IgWh8vNUKCWTu7oghKSc2rnNR6/s8NQOLNKwP1rlc/vX6V6fsDEI/Fja4EOuY3v/gG7rFvvH1nm7ajjvHRdUuJFgP2c6Eut1xThibIZ8CEP3geOLXGG+iO+gflha5625V+79dDojoywPG3arCeKEru24vntAOmHy6P7n9Tu2oVH+HfVJn0qp5nfIEOCO9Anm+pJUOGOHwVZ+jv2Qd6R7fT3q3lET9Tw546kdBqUFmYjJPwyw0TmSedenYSIwqD0pmTCqqkzxltV6GlKO4jYlVIVh7Rmq7SjTLpVOSa9FEKqSy3cx4pwwGtSklBjHVDr2Quk7W09ApBSIMKxrg4bLC9Oy+xtxsE8XHJX3zLr4jlPbSVFOeof3jq4rLIKC5okzxSNZWcuRs1du8MyVHR6c1iz4RfxCgMoTYiRMYHdnTL01oGsjbuBZZ4CrHflCx33NlJ8gk2phNqzYrSuuxcTNvTGvpAlXxDEJFV1VMw2O7CtaFDQUuNwQtR6Onuf6QErWB/GxI2pmZzxjY3HAJbOyMdXk3piYZtShoqfpOyd0XcT7wGS6XWTF3qj5/QkIpWazms4VJrUraZ4g5DjmYHyVqlpiMNgsdUbEeYOFe5HaIe/NPlYtiGTqg/FPF+xWm9nv6yXPcud9eI91encEC+bYosHktgNv6ISqzAVg5hACVZH9xqKAbHMmdj0d2xntu2hTrH7J1AGqusJ3jtksMZsluoKS9Z1y50renpXgIQxrpm2HlhzZO6EpNBMtxboPgsRyYkXD9L0D7gjUrFjTMmZql7lnZ4+HLt7gsa0ZJ9wig/oIBHChIhcOl8sVq9WIg9iy3jZopyTtkOAtVZllGEGOgiwIo1ZpmHLM20L8uFYcMGWax/j9TDcyjwCNgekgM2sqoxalwHgQmFaBaVXROmFQVay5inCwz7fPf4/81GNc39vj1PoSL7x1w5gCs8i1nTExdzQFUcuZwmh2qCa2D66xtXetbDwFHSuLed4rUpNce+8NGSyAQHAOzS1Jb4G7Vhqv9v11VRk5E1egdYd33tSZCMF5C7xSz7k5+GCbGfRlni/B5nBZ5ifdeyDHd0ewQDGC8LBQVQy8n58UitI4Tz2ygn2WMtOUGHfCuMvFC0rMvKEUiV1K5GxsYc1qELRz1E6I3pFyoo2ZLKZ7QQvBz1kRWgc7PUJBR7TURDkr3gV7XaW2TXUpNrWHS3MxaLAACwLLkwlnr9zgoWt7nJtVrI2WqRbXoKBiRgMxtWZO4JyyQM12N+FkGlkRHg2ByJLJLhOyp51GfHakoOAFXbLTbyEGRguLOHXE0OKqgE6t+WaNyoALoJXlL3m/iN40IZrQBC+/fp6NB9dYbJa5sLvPsdVVO42DJ7WRrf0p427GQnOY7tjWnmjjGESsd5Uz5dCwfkcXCyXfIkxFScl6Hrnc40mxQbKGZmc/f96Mtd+VkjVs+7NAysY0azsGtcmUTX7s5kiGat8Bu6Nno0oVKlADXyaz8buu0bsmWLwThsHTeF9MC+zmh0J3KBkAXvpueaT2wuKgNhZyF0vdYuKmaUyQhar2DKtAJULlHLW3NGhnYgtlUFvn3ajxgvdKVdKo0RyNsa78LGWK7AEJUDvbnbqYkWBNVAQqEVamU87c3ObMpS3OHignF9cYDk/CYs8xkfL/zux3SuItyeg6I6nZjhODjDOIL/l7y5wi4xpP9rbgnAhxlgnBW13WCjkqOYFuRVN/BiWPM7JnAqkedNCk5FmCGpwhK1zI8JGj93OkWeLopW1uLqwUXp6QxpmDScfu+IDNRePX5dQjh55AjfhV/rvn3ySqbVLeYW0BgdrbJuIKDy14Ry3gJZtZCZa6BmcsYu968w7MP6FsYoI3LwXUMgJxVKFC1aFOiiWVMdp9yQb6nlDSWNjSmVnbkTHmd8zpB65PuIuCxcwlIGFalIH3LPhC1sOOeFdIk5qVYRUYhMAmihfPzfHMRFYl72y8p6szTTGvi7m36MmEIHZ6YPLTUHnqENAsiMdUeHNBlBJcoMuZg2ImkcmmdxFHVye8eLTNLO1uc3LrNqdv7HF2omxWCywMjlBtV
GjfOBNfGAB9gNruZ+4lFMcVWHINb812IBQGQrTdP9c9gRTSIMFUELXAIAodEfGCBMEFOewhzRRXFZi4Nf+0LtvXpi4TxKOtErvMdJC4eqThiFvE3YgcGS1x+1vnOZUy570QkylCt/bH3HfUoG9xHjSaDFgdg1Cx09Zc3TPKjHH+eupODyf7eTvALJu8MQLmcDjYv5pvgvW2fEGMde6B4AR8UVwGqezfvOlggoOU7L/BWWZQl39zRUDn8KXOFSbx3dfoXREsWWF31tqp4u2N9QvClX/3KLWz9CaIMvCBhRBQlHFMrA0q2pQZF/Zo5RxDZ7lw9460SFhowhz3aIJjVAUGVUDUdr3Z7pjR0rA8RKPhH7SRXHvaqEhwNKljsL/HmStbLF/fZmOn5WRbsTocMRqcILoOsFopOXAFlqagN9YvdSTJhNIw05QtmFxkwVfszCZIfWeDEVwCvEOnoFPBV26estjR5qgXAtklJLhy8ggMhdxlSCbK6g3mNFuwxRTRDKFx/MnkCk/de9qK7dag14dPH+dvvXWF/7R3nefIpDZxbXePlDp8VZmIq+gZnVMWmiH3rC9yee+2Fe5zlrj0gFRB0XrFZ1FMllNWc0ktVTD+nSGh1lS2gGsLgKA5I/EQklYxgqShmYfOPJQ6CQA1eyZXKEUp2r3f6+5yNAyKnt2b/BZMB49aUFin3PBwpxDEaC+CMomR/S6iYrBvyqZ2NG6VmdFF1eIEIjRNzTDn0kcR1pqaQSguMk6onafd3UF3rrOyeZQ0neGnUyY39ohbu9S7Y+pbB6xOMiupZrlaYKHexC8EdJjNKc2ZS0mctdZ/cUW91z8or+CsPhGE7Aopxh1aM9VVzX5qzWtYHFp2Ze16aowabOvUsD1R49B5iF3ExZ7uge04ydJESUYaTVi/RWKhkhSqyixFvsc2/3jjKfLY6henAjcS546c4hcXhmy+9gbPxcz17QMjQxZxnSkzbdGBcs/KgK+IARxg/DhzmSl/R3GlYevlkDOG9kYhnrnUTmDOSu6/V/oOvcyDUDGRjtFYU5Fmy9whJ89FZUZqTdiawh32ft7tuiuCxUh4QnRKIpl9KZ6q7EaKCcSyCE6h9tYvmaTMJGX2u1QsWQ+loikncjaipSuQZFVsWZekYqkK7HeR2omhX95kyUPvGa0s8/Z//+/4+Pp9DPaEuN+RO/BVoAoNTlaQ4PFNDc6jzpIFK/RtByvqbogRp55cegDOCaIefOlGp75cUVQT3hktfcE1ZK9kb/cCtd05y7wwggKDIxmpLIXRrEgqasyodppkQXKPFJfiD0P/co52cpfFdqG9xalTm4ySQ4mGXgnkidU5J5oVfuHpxzn+2hu8fXOHaTelaZYNzaKnrljf5MfuPUZdVeZ3kBJtpjhLmrfCLBv7QdVEf1GMSJvFCviYMzG7YjyikCCSiwzC2eunEG7pUXx71qYR86XR7OamHYVpVwLrsDenoiWo7/aTRSzS22QmAl56HF6ZEqmDx6vQFap3wgp/nxNtsg552xVGsOYSIHYj+nK6y5ntaaISU1VWvjILpOL165x5TAUy+7/3BnJ1gS9dPc9fXXiEWj0yCEgTcJUz50lxdmI4K75zBvEOJZO7jASHqwK565AqEEIwik0ujbNcUrK+IajGkk6F/VhVFYLQaTRLpL4bHzxSOSvqy4PNgp1aGVQM2qYEjaizVEoOfbwkFYGa+UaZHZEmXCW8MN3i0+vPQGeL0CXFNYIMHDrL6DiztFLxqUcf5vztG1z4+ks8/LFlqkF15wNFxHF2ZYX71jfu6EVpMfk+ZA3H2OF8MIaEYLt/Qb60pFAxRfMu61Jxd+vZ4+bOGbMFX5fN7jVp0SypZR6z2KGYz4OIM9aFmkFJmxJtEe5lhf/P4M+hwReR/xb4eeC6qj5RPrcO/FvgXuBN4BdV9bbYOfZ/A34OGAP/WFW/9UN/B7bgyNBUwXb7YqqXVJnGTBBrFJq2XcwFppw2wzoYcxhBJNAWSkNpeRiMnIuXlwjJe5BMVQVC4alLtt129rW3Wbywx8bJB/nK1ef4dnuND66ept9xenmwiEOlHPHiEG/pQm57HlIxwSud62yap9J5DqW7bvVAf/UEz1QWyyA0TF2kiaE0Rp3l8mqMBTUhTEHT8vx1IfahFri5p/SIiNUxJbg8fYcfYgV7MmG64Djih+SpoVsaBG0cREVmxSjjoKOi4vHV4+zsT3j9V36H4598lqWzx+dN276JaK+pD5LCNJ73WyD4CsUauc6ZXZG/g/2NAsHcMmVYl95IqU2K+ZhK6c5rT/f3hSxpqajmhDgtaGdN76nsSu8GdH4q/epo8K7r9Ec5Wf6/wP8D+OU7PvfPgN9T1X8uNgbvnwH/J+CvAw+WPx8F/l/8kNks9lILTQXTUwPUwRlC4WwX6XIilEy0jZa3Dp1j4M215aDwiOrgcckktSnn4vaopYNuYylyspy5dhm8IWJOofvmGyz8yU1GgxG+qfnI6Wf4gze+yJnpKkeXVgs/yRna5Ap9ReznCwZEuBCsPig1Ad4Tu0hVO5DePiEjlUMIlib1TFspj60Y7A2p2Msta2GhpHU2FcDImQHxkFM8/D5fAs4DXQFIgqdNrbk8iom+wL4OZ2lt6jpCFl7gBs+eOYt02Wj8leAGHgkCM/M/QwSZWj1AhOWFAQubFZf/4BvcWFvknp/6CPXKggU2iel027zFtFevujnrV0r90Vut2huR0l13hvCVFWL9muJDVjrvznmydmju6JWwTux9ikA3U0ub/aGrSE6WWko/8UAtlS2x9p7XDw0WVf1DEbn3+z79C8BfKX//l8DnsWD5BeCX1Qg6XxGRVRE5oapXfvjvObwp6sz/K5Siq648EiOi4IvJUe93O3AOD0zFiI2+FOpNENDARCLqIEUrbnvO2SzqnM/UOKF68S2Off0miwvrxRy8YTQY8bH7nuU/fu8b/KPmI8bvCmo1R0l4xQs4ZwHoxDD+FHHY7uaA2HVEhFCBrypbFLmgQ67YPeHQaLxpK0QTy2HEdhpzxq8bQFB7C1SMJS0oUuxlNBcmsslWbK6JApLwjYeC8kg/Wcl5JJdGn7N68U3d5WdXnkInilTWA8qacFMz48vFPNCABBN+qYKrHadWVmkHyoVf+W2Gj5zj2EcexoVA2+4wnt4sJ7/O2cl9S2DO+dKMUiTSxUtNtFgYSfk6kWLgUb4m22bqncc5j3PBTAOlkDk1410F5eeiVrU4HwpEbX0018sdELr4F2+yd+yOALgKHCt//0FTv04B7xksOSvTWcfioCnryDErg4eaIAyAEGyeihkdZAJC7c25MgRPBAKRQRXm7h0iUDc1VRdpnHJAJGVnrGInaOngu1feZvWr11hd3qRqqoK2GTBw7+pJ3jx2gi9tnecnjj40D+qcE+o8Tn1pmLmC1vQLuC/cM66pD6FasUVoLftC6VcxTbpVrKbgc8JyM+LW/m009OxaK3ypXC8IRApKmHJCXaHKTzIkey05GuLkgrN6KMMcu5XyukV4y9/m3oV16uzIbUJGvvyeUmMlbPxH7aAqabMz
ZgAxISNHM3Lcc+9xdm5e4bV/9SabH3uSwT1lDEWRFRjCbU4sdp6Wk78nOfbs5DlzmHl/rafQpztqnqzQtjODx7GmY0xKVVXklIhF/2K2sLn0agpZs9wJFLoiPmvj7F3X6Z+7wFdVFZEfcoD96UvumPw13DhCUrMvqmqPiLkGuiZQFYBi0Gu5ncdjvZbKCW02XL8Wx3BQ41RxPswfBAgDCXSaGQXHtJhrezHBmL52keUvX+DE6kkG3gpx70MpIG1xffzkk/za85/j3M5tTi1tQmO7rKo5SJIFTaDO6gfNGRVr1mUVwtATx1NzaFHFqxXqlAWTUkKSzomAWjyvlusB5/M+gnHLfF1Zn6RLVE4M9iyd86QJIaM+obE077ANATWIVIL1cpzz5K7ICkRRB98aX+LnzjyNzjKyZMCFb9V4ab50zyuBILjaoHAXTI6tKUPtkDbDTFhlgeWFEVvffolb347UTy4QjizYYi8wbz8FQOh9w3TeXEvRlKx9T6oXhgUXCoXfOGEZLWqDYO9VwUugKosmA+Tyu0pdJMkspGLKZZqCBU7jw58/DXuX61qfXonICeB6+fyPNPULQO+Y/LV63znVrMww/b2jIgdXJmXZoggCA2+zWHw5YnunR58yTTCo2Ze0zNAwh4rQRnOOaXwmVZamkZXx65dpP/8aZ1dOMKxqfFVboed8GVenkJVaaz5x34f4te99kf/16OMMdWjeXSVNlODB9YzWUsP0Ml6x3otrAr1TuPbS39jhQoWrPbmd73PltFEWw4i9bt96Ea44VBYOXKIACBwSy23LMjMIhyPnhKur0uw0jUgyhVtxsbdUbYsp6iLHZRFqQRY82haGdACJiht6Ox0GntSfELFgtgNBK8HtgB5EGDh0nNhYWmQ1CFvf2mLc3CQ8tkE4MiJpJsaI4HChAg6RTDBwR8ROw7JW3kHj77l65hhaHF96R8uihhQVO3XnmhYLUl9+pvN2KvX/3ntFH0rk/vTl3vVf3vv6DeAflb//I+DX7/j8PxS7Pgbs/Ej1ChBzsSwqXKsuZmLZvb2zMQRBBDT1zHJQG3w0qgKNd5aWYdysphAna4FBMbaoC7Fx4Dxy8Sb5My/x4MYphqHCO5sp4n1ljicFZvQuIN5zcukoK5tn+P3r30WjVdHBVVZ2a5lqVfotztuD8k2Dq4ItMufpYkev8NNoHXaDSA/TISmsQ+8cQ1+ZZ1Yq3C4AVwibSckx2s8zApUtnA6kK+5GwaPRHGvwvWqyONRXxUxDhOdml/jg5klzwRlZUOAFKiCARpAkZA/ZK1IbJq9kCBkZONhX0n5CWsi7CekUxuDGiSPNKmfqkyx+a0z3B2+Trx8c0ol6OLsU96ihhb2V650yZC0gSD/ZwG6HUfJR+7vrSzInBTUzEV7vwWaES9vMgi8aFle+njtQ7h9w/SjQ8b/GivlNEbkI/F+Afw78ioj8L4G3gF8sX/4ZDDY+j0HH/4sf9vPL87M3Xm5EFmXOISr/LmrpVsDRhIAv8wa9F+pCvAwihuKgc7181ozX8kZDRaPK5PxFJp95kXuXj1K52na4qprvSta4oxhwW/0QqpofP/sB/tOLv8dbty9y/5F70GD08xwzGqxwzCTMGsuVrrZDouJdhXctOZtrjJdAkt7Q2xgHjl4K60maqSQUf7OIxNA3GFBvxnMEO2W0Kybq9MVwcYgpIIbNsFEg2yCk6tAPeCaZt2fX+atHHkcbZ0TMUhuIQB4bRSZ1GVmq0MbhKyFuzUAUt1aTdzrc2KDqGI014GpruJqEAdwssuJWWAur7P7JLlfSBV4ME7r1mtCYKrKy2RTmy2arr4BYvaa6GJWIEMucnR767u2lYko0laXR3tu68EXaAGUERzRPOJE7JrSpOe709dCfKVhU9e+/yz/99A/4WgX+qx/2M3/Ab8EmP9m0q8aLdfFLpzkXJ8JKXBlEJDRVZbQJV8ReWtipQoGMS5ML2z1DMPRn5+W3OPjtl3lg5SQi3kznxCZjSc9JK2le8WCgCgHtIk3T8LH7P8TvPPf7/KOFNRq/iHMGTSbNUOx2ehZx7qKlDKY3IlQNsQwphYh2kHu1nuYivLLURlSoQoUPA7p2RuOCpVXel13FKCVknb9OspCIeAkQFWL5+vI1mWzcK+cQsQbfa9ObPDAYUXc1sujt1OzKIu/XjQcaIzCKeNJOawvQGifogQEnfqUhdA6dJrRVsjM+nCSskZoU3VcW8gIPLixz0rfcmk2QjSFyLDBT5fwb17j3ng3q4MveUNKsnOlSNBa6E9rWGI8x57lJR8o6h7dTNtfKFDtiaRWkbDVoVsfuztgYygKzNjLrIjnl0tz+wdfd0cGnIFfeSI3DyjGsfJna2zcDmcuDUcrYawg4mwzWj8R2PZuXQ6+uUr9c/tYrTD7/PR5YPUPwA9K8uKQU3FoWelE9AmZrCp0o4jxHlzY5cvZx/vjid/jpez+GUKGNcddyxJpfGNmyR3esCZhx6pCMGXY7X2BNSie/Tx+sL5+KaZ0PNTHOaGrrXWRM4kzMpd9jC8mJGKwu9vPRni9ljUxF5ukJPX9OlRf33uJvPPIIUgnqMqkT8rgzaBgMHl/wUJWFlpJRQxLkEXArIrmwh6cJ2vJ+K4GZ2vt3guusUSoqSAeaEotVYNEvkV9Vbnz3Fr9/43t84G88g6BMZjMWhoN50FjxVFsvLmeGVTDun6Y7mpzWhVfKSe09OUd6B//54lGbroAwV432hiijdzAR3nn9WWuWv9DL3mRm0ASWBhXLdWCp8oyqQFUkqOYeadryabJpWwWvMutWVdpk5tNgJ1DtvQ1wFc+VLz7P5Hdf4fTiJtkJKubf5ZwjBNPEqC+KOe/mNrKuCJW02DGJFx49/givDisuXX0NYrKFm4ygJ+rKgFH7o0VU1vcJxIdiyOdRX97XXJPu5lws5xxROwbViBljxPm5e4v1CsruUHnCoEHqCuqSbnhB6rLYHYWFW8AH1w+EStxmgueAI8MjphFqgXHCBUd2ivrS4OwLgZGxilFFa4U2m75mpmjEuGiquCgmCaj7RWrfI1HRrjRfZ5k0SeRZ4trNWzz36mv8/CNPciwMGDUNywsLpn6UQ5tWS7APnXF64Z9lYWVOpvdF+9RrYHypBe1n2IQxmE9N1jmsQt/Hf7frrgiWfqG0XRndnE0GS1YcUDtHE/whPq42HqGfMDzNmUnMtAptQXmyQpuVWdvxxme+Cl+9yrmj91DVI5sWHPqMxB5AihFyf+xjC7sUg10/KwKHSGDQBD74wMf5vd3Xmdzegi6hMRn9pvQy0izawywsXNdUmPMj5NzR+4Dh5dChxBdWrneGkCksLSyw2+6D9kOHXJnmBZoUkpK6aOiWWkNXAqAZV4e5qV/feAOrAdQ5/mT/LT509F5cU6PjjM6yMZkDUBqWGmw5Ogn2PrWkfSOBKaRZNiZ0FohYZthmXAdpbOBF7rJNL2ttt9M9kwO4Snh57ya/deMNnn3iCY40KwzfHNumMGcA31nomxdc7/qiarC5fa3dw65roYAsWqKhDwotAE4udZArKsreWP4
9gDDgbgmW8j9JYZZga9axNe3Y7RItRlVpnGcheIbOMfSeobehnE6ExpvCcuidIV2Y2Gs2nvD6v/1jFl+dcGL5GM7V1FVDkGC5rotkUbP2VHOjD/RabC2+yoWPlq3wN7DBsb6wwuq5D/LVK8+TZzODUftmaBYbrzc1WKqfAe9DjbiKLiZy6lBJRr13RpOxeLSASWWnXAiLjLv2kOZBWSRibADrdKsV/jnhVHDZ4dTj6xpndp5IOYmksAVaidzcu8p9x86ie5lcGaqWq2zwaihSbWwH78YteRJhlskDhbGi0f6gmBlg7HOgTI4JpwlfGbKm2VSmeKB2aIAvXbvA89vX+MUHn2JDG/I0MzwAvzOjd2kx0qXvV8n8lMg9d4xDjX2PKiomaaagXv01t2gSC45UVJFdioWp3pue/ODrrgiWPpfMGIXbiWOWle1py7iN5bQws4qFumaxblioAyPvGQQDBKoeOcOREPav3ODCf/sFTm4HNpc3qeoheI8jEFyFpypWRd0cWhQHImo1hZruQr23hewFZxiqoWPB88ixc5xfW+TqpddwWdEukcvcEe89UtwqiYV/VBYOwRvsKwYmaBCSJPr5hkGCjZFwnoV6yG7XlgdcMCIR0EwuM2gURSUbiqiFyRxKE7egPoigwZcJyJnX4xb3jBYZVSvGEAhaiInFI612tjsHOymdt3orxQgTRXczeZJxpabLRTsjUZE6IANfTmhDMDUl8tBeX0T5rSuvsKUT/vYDTzBsHdIJ7GYaWaJ+5eYdHXtjJfTdexOs2ZPukVKLz8OuvmY7LRL9/Eg7JVPuvQashsscchK1oIjvdd0VwSICTeUZlvHNddFqW8B07HWJadZCxc5lnIF9n+fQwE2z0bm3vvkSe//665zzm6yO1nCuMtTLV7aIS9/Gy4CYlJgiSRO5S7ZDlxsqJSWzG2syYjNRsrSmCp6n7v8QfzC+yOz2DZx6HIIvJ4lzZo1khXgBJHygKh5pmg3KzQW5kwKFWrfd5rovVkMOQgKnZVGUxdtUuMbm1rg7eiioNS/VlR3XH1LzicVYrgq8fPsCz9z/sKVcQSCUCQWmrjNzicxhk25mqJNDYE/J08KmpizeBNqZWEw6RWMmB1eeVyJUxi7el45/c+k7rCwO+LkT5/C70QRoSY0JEGFhZ4A/mBV2cD8Ps/SwnCv1R/9289wnGSgnRp77UGsBj8D+3p/Yh0FUysvCNXyv6+4IFoxmX3sxQwpMyJWKuOvWpOX6tOXKZMpeZ8V9qzCJia5UscF7ZDLh6n/4IvUXLnHfwikGVYN3NrLZTC/KcY2hJ94Hgq9JuTPzhGDuKWWiSSmode7W0kGhj9jDcBJYbZZYfuApvnbpBbSbmvlDgYdF3Bx+7a18zB7Wl4lbcV47zwt732PN9pAHoWZfMrhs3lxCqYV0jv7ZgWP8snlvQkvKlszk2xqXlsJstfuEvM/mygmktRokxcIFqMoJ67EivsDOUgsaTM9CAsnmYaBdKYuzeayJQp5FSIJ0xp4WZ/Dx7f0p//LVr/Lk0eN8cv0s0hoQMFdIiqCTzHBpFXnxKj1Kdcists3QzBf7FMwCJ/VBcsepUaAZuhiN3oIpK7t46Kjfjz8HmWtk3u26K6Bjgz1tR1WFSWeFXA8BT2NmlpShdyQtmpWUUHHUJIbOMX3xTbY/913O+A1Gy6uIr/F1MIEWvTTXFr0L3jhSPQKFEnNXGnpmGCdZ0LIrO5TsAjGBl8JvUjU9SBAePHo/f3TzbR66+ConH3iK5Iw+XkLN+h9dZw1T5whaWZDGDhdqJDuscCmDmLwvehBPTWSSs6U/3norGjLZBYKLBn8nN8/XQU0T7wQXjEuFs3RPsonSXty5yNNH78Fluyc5ZtxCMElMTEhlHgVVZSCDpox4E9zlmE1MFpyRPgsop84Yydoq6sW0uhUWAEl5c/82n3nzeX7u3OPc36zPuXVSOags0B0ebcC1ntGu0I07GFXzWrG06efEzMNhRLZw5kmU2gaUioFzb1USywSxAq3MU7hY6C7Ke/uG3RUni2ICry6aXqRLmajQJmXSJSazxPak5eZ4xtXxjKs3t7ndJrbbyOzabbb/hz+k+U/neaQ+wcJgwayUiqu9U1Mgqsih72/uC7/iKuICwTWkWUI12liH0iNx2qMtGa9GU3EYqpXUiIqVr/jAAx/h98ZvM9m6jM9ikHJRVIqabDfkgkg5oapq61u0rWky2pY4npC7rmg1THMRXEVqKnJujQhYefC+uFt6cgzmHKNGf0HMBihrsjrCWRMzdRH1ZhF1afcKD6zdR9qL0BfLnaLe7JUK8YpcYfLlYm+WDmzGjWQ7bVMbkdL9ttnY1tiVSixF9A5tM8/fvspnXv0Ov/T4s5xtVoldNHAgZXJj/tbiBPE6t41dWj+JPHfV5MTSnyB2mqqWQbkllUJ69x7mtUoWo/ykO1oMh3+34LM6q6+BLKV7r2C5K04WVUpxG4oepbjKQ0nJbM+YJWUSlfFkQn7zKvcdTNj48hbNYBG/uoGrG3CVudcD2rsfZkD6wHDzxhz92q1MB9E4zyxOoYskrziCpTa+UE2w3bPXrvTjuFUdy8NFlh58lq+/9Dw/NlpDFwaIKH5QW+c6WwFvC7H3KgOdzgijEaCEZmCcdHHkbKmHI+CqhqgRV0ZouEIQJGBoG5ns7QSzuC5jNIphg3KYtlzotjgzGjDwS2Zo3iWzQRLmwilNJW2cKko0DY8Eq+m64pbSlZ8di44nGRzdz45RVdIs8sWt13nj0lv842d/nAXfGGfSi93Dylswgo3lLiImAapQs7w9ZOvmmLQ5Mil0qVHmELgWxUGxa81lFksuQIDN6LHZn1qGQ/Uo27xe6VE37Qmp7x4ud8nJYg+qmHLazcceWFM5Foc1y8OKlYWG4Bzt6hq7Wy0vff5PeH7/CtEzJ1uSk/n0KrgMXq2VRUyWhmjp36j1KDRpSSccUgWqekDE4VOGnKyfEROaCiFRsIApCEqhHlM5x4Mb9/HyxgrXLp83gqAAMRb3k1ILaNHGi+DrYEU9ufRYpHSjpRhhWTDW9QKT3NoMmKqwcZ1DQmVHpxMoOynB2+kTAq4OVrALlsKJ8NLtN/ngyQfBK34kUHvyyDYlyT1+bxoVatDGAk33knHQRJBQ6qHaDs88y0g0loSrbdFHgd+88By3r73FP/jwTzAKzbwvpLOERGuWukIh1gF9lNlCbxNr68fx37iKxlwM8OxPFyMpRjMlKSeMzRRNpBzfwVKORcIwr39U5yBADwwl7bX+7051gbskWFLO7E5apl0yIwU1t49ZG80L1wmDYK6SIThmGU7daLln+SH2Ftb4dzde5OXty+QuW8qFJaMuR3KKRgd3DvGWi3vvwbt5KkbO5K6zI987qrpGxRsMWnYlcp5btqo3nUouzANK17/2jqce+BC/e3CRdHvH+h1ZCqnTlz+KJhvJ1tQ22at3Q+nXtQ1unfdBGQwaZjoz+yJfdBpSgoySe2cpmnKdQ6XWKyrdbcnsphnSHrA5Om59EV8cAGZlIcVsu/SiR2qHFDhWWyWnEk
xtOY2DQ2cJ1zHv2OdpRtvIQZzy7577IusH2/wXH/5JgjfzDSJI7WDBoY0FaL+RuCilZir2t9G6/JuLp9BXbpFjsr5ISYtjtlQ9qfVLckrvOC16a9bco4MYt2w+ur3cPauNi3rTBfQ9OpN3RbBohnEbSSLUhSfoHQwaY4/uTSJb446DWWTaJnwXOT1uWWgWePzYOX78vk/wcjzg37/+Za5t38QTrF+SDAmpXCBgdjimUSlFel0hdQ3OGQI265Ayj7KqKuM4Ff1HttykpAKli11OQqt9BBcqNkbLLJ97mq+89U3SeFpy5MLi9VYImw6loESVpXVZs9kcGVdjjo4BLDZL7E3GPYxnp1vAHCeDMQKQ8nOlP5kEVwec9/jKKB+vji/zxNoJ6KR4FJQhsEXpTLJaTaPBxDmA7mMpUjLNidTl63po2gnUMmcrbE+n/Ltv/SFPCHzyg59EnC/9mQI1l8KeUF5nwCg5mi2AyzwhV+qmpSPLrNxwxJ0pKSZmXUfblcnWOaPpsKsfi81Sl5KdPimZYQVqzjCl/9KfTjEXn7liuzTHo9/luitqFoCclMm0QxZrlhvzsJ3FTNtlkgqxywyDEfZWdiYcczWXqgrxjsYFPnj6GW5Nt/nNC9/mzNbr/PjJJxmORjQlGKw4LPBxWSQo9KO4g/dk70lF8SviCL5hMp2ZfSRm9iBis2MKE8z6HEXoZ7CT8PDR+/nSjQs8eu111gcPAQGRjASQ7K3p6b0Z2IWKrp3ZruVMSOZK0SlqhfpiPWQvd4XpUPoJKRbSpUdcJsbOtDelJ9EPnhWB8UB4a5T43mrDsyce4joLbFybUkdjHuMEqcskMszmVQdG/ccdQsKGIzl0psjIaiCyqSRdFq7oPp/91h/xyRPHefiRDyLiSF2CTvDRZAUKuCzGi4vWj3ELzjQzTc/29mbAMM1IIxw9cZLd75zn6tPLdNJrVYzi4p3MaxSgmFgUjUyxxe1Hjb+jAVn8zfp1oJKw2TTvvkbvjmARwfvKUI2YaeqKVoRZVhabYB12EZabmga4//XrVMMFXBtIYoW2OsfRhTU+/chPcf7mG/zyq1/gI0fu5+ljDxFq66bbIg3kkisb8QOiQHRiTcdiMoG3Tn5AmGG9ACl2oBYXpc9AgUAVhETwtiU+du4j/PY3P8MvbhwnbKyjUgaT9tJjMeDBS0WrM3LqiuFCLAIw6MdSj1zNm9wmi2k+ymgaUhcLq1bLCHGM3uIcNIG9Bce3hhO+2d3kufNvIM7zzThGU+LEcJGPPXQ/H4qLrF0+IIReSyPkOpmxxYGlYCJi/ZdYGre9wyMY7KvKq2zzx9/+In/9zAOcuv+xogY1rlkSoBFcU5H2OlKr5R5bCpHbwhJXKdPVBJcgV47kMn5Qcfrcvbz83GV+58Q9xWnUcuJCEjeuWQFscjb3mNILwHgdRYN/Bwdszg3rETeUnejfdZneFcGiqsxiYtEFWlX2u0gssGwIjsXKs1oFBsFMvo9d26MLQ0jeSILe7Ik8VpM8cvQBTq+c5DuXnuPFl3+fT596mpNrx41Bq7HQzzMZD05tjF4u9j6F5gIeYvGcKs2ErptRV7UxWVVLelUQtwJtOhfwdKw1S9T3Pc3X3vg2nxj9BDpwEI3aQmm8StHr1HVthSk9vHOI1mR1dAo30r5R62O0hVvkLBICTjOpzfb3piYD367H/Nvt1wnZ8fqLbzObzqBt2X7pZUIIXF5a5vnX3+KLH3yMjz94mge2Wo4cKGs+4IOQ9xVaRwhGsZfWVqNzFZmIThLSWeP4Ob3Kd9/8Ln/r4adZ3ThVoFmF1npaQtm924xJrYFi0tHz3RBBs6WAUinaKLJeoSOTDoxCzSePr/HG1g7fWjt2SFUp7PBe0zJnGEdLjecOMRj7QctpY30n5lB1H/sxv3tlclcEC9gGsHvQWld6wc0hZAHzBUumka4mUwZbU243Q3uTPeO0NJhsJIFjdbDAJ+/7GJd3rvObF7/DPTfe5MdOPsbyaMGabr7/PlM6Omxoq4FjZmbhqgqXPTqbgXOE0Fgjsd+0ipXOfIZjr5PAU/vEI8fP8cWbF7j/xlucPPtwQabMPikhmBZejZKSCs9r7gOMfW3l2HDLLLJutj9VIGmhmTcVlObt+WHmzbMNTy2ts/+9t/h/X/wTtrb3OHVmg8m1a7jBkNxO8YMRUjn80gKxa3nx+e/y/Deew08OaHA8c/o0//CJD+NONTTXpyzOomlwghBS4YqJwIGSJPOlg7e4sv82/8XTH2CxWUfHIAPmUL0RKrGTJtj7QRJaOyRavebVlWkBGTdwsOiQpQoZmQumC5Yera6u8re3r7J1sM3rC6sAJTE8lBH3JYf39rMLmjz/vFCygAKEWDtND7U/717f3x3BYjuoMp3FIt5SKi+sDAKu8tTeM3A2JdhfuMlUK8ZqlP1MfzIkehNpV+ZzIMqp1eMcXfwZXrp5nn91/g/4xPo5njr5EKGp7WSQXKTMziSxItTB04rSKkZE1NooFwjB1+RUuv1OikzY7JU092pJh/eeAZknH/wov/udz/H3N8/gl0YQjfAoGK9L1aj3xa4SChVGgid3HU4CTXDEypSSmWxpnFMInltM+dIR5SXpkKajenKZ720dkC8ecGJjiXZnh8c/8jRN5RjfvIVbaNCVVQZLQ3b2p9bRLmKyg2vbfOmN82wPHEtH1jm4dJMHhqv80ur91ONICg7djUi2OuH3tr8H8RY//9BTjLplg5A9ED2iDu2skLcSooAiteAqm+qWRXFNqbM8MHCw7q34r+SQxRzVdDa148yZk/zS6xf53OYqrCwWNa3QKTaJrDRkWw0EEUQTs3aM9xVV1dhYdjJ70wld7ieFCaOqwqG8Ud3lJ4t1Za0gm8w6umh2NdMuszqqzTEyZ5YHFXuvXGDLrSI4fAhFb2IWQ0lNXRh8MSvAkBjvPE8fe4j7V0/xjQvf5MXnfoefOvUMx9aOUNUO9YFQmQCs13EHb/T/SVJ6spSolh3T03UdvrIGocGT4IP1eUSMnOlE2Byt8Nbph/nGG8/xscc+TPJFFSmYzqVPAETo4ozQNAhurl3pJdV7VWaskUUprJfgcZXndw+u8N1Z4tFHz5F9hbpMHjpSUsZZcYMFbu8fsHF0ndMffQoqRzUacN/mBq9+903UCdcvbXEwnrK5ssREj3Njb5fL5y/SrA15+43XuP/H1vjY4grN2OI5xsRnb7/Eet3x0bPPULUDtLPuf2ooMLtDklLGHRTEqzxtZ+hZWKqsXvBiDjFLpuNhpjAtqVHlYVrGZniD1x999B5W3rrJsY+cw48aVE1inHPi6o1XmY23uefeZxEN3N69wMFBy8b6JqPBBqqJyeQms7ZlOFyhrpbw1chEe5L53VHzruv0rgiWeb6YkhloJyuoZwJ7s0iXMoPK045nrF86gPUjlsrkhO9/gID3wZjCiJH9nENcMaiLmbXBEj/5wI9zYecy/+Nr3+Sha8t8/OzTLCwuIWmKa6rC4q3ICJIyQzFtRHbWcReBRIcnELsZtdT0qAxaBomm3mpUqZzy5JlH+fyN3
+LcjSusbxw3zlRVuuRir7OqGqbxwAADZxqaTiOT2DFOkUuzCS+0V/m4Xyd5mAZlp5nyfNpnbyexu3uLLnf4oeOt515k5623GCyMaEon/+KtdW7vHTBaHnHsyArfuHidsydOMlqoOL66wivPv8KlV1/jYO+A2HWkyYzujX1ElX+rn+Xg1GM8uXyGtWrE7958kfsXHE8efxyv5h0sjSc7kFSc/0XQxqhMkoXcJvzQI85Kc9cYD0yWgnkfdBmZWpGfCy2Fg4wsCbmMLZRsFBw/VY6fXOXa73+TYz/7EXwdyBpxrmJ58Tg3Z7s4PFU1YmXpBJtr9+FcU6xeAynDja03OX70QQbNOl5qQObtgHe77opggdLR9uZCkkStllAYzyKzKMyS0ly4zENh2YRM9k1QuEnOmV+x9NBhMbYga3GNsZtQ+8B962fYXDjCd6+8xL9+6ff51LFHeeDkA9RAGFZmGiE6PzVqJ0xjz4PyQEDp8M7TdS2uqvDF1E7Ke3GocbZiosnw0JnH+a2L3+CXVv4qrrFutgZKP8VGt+EwfpcPRmAUYVA30FU8tHEvqwRy5bh4JvBHeo2r12+xtAzpxg2+/pnXeOwTz/D8577AzsUrLDgh7d5iHBPiKkazDjfb43bbccPVyHDIef8Co8UFzj37BPc+8wgn7j/NC5//Em88/yKxPbQxPX/+PP/yxi0+cu4xTh45zhOLnifOPILbr9BGCpJUWM9q6VMeKGHR21i+oMZCLv0fXweoFZYOB6uSiwo2Wx3na28/e2ZpJ0GNChMEjRCiY3NtxK0/fJ7Nv/KEETkls7SwTNM8RkwtMe5ze+cii4snGTRrttGlGddvvs3aymkWF04iYuCKc44US5PnXa67JFgKEoLOi9ukRZvuSn6rysYr11laPG4LDEvdsrMzXgoUbM70BldGNZ2Im/sQGxvYoyw3Qz586glurR7nj954nvMvXOaT9z/Jmj+Kr5jziIQyck+tydnPUxcxvpikSNfZSATvjMBnVD0jbIbgCcD9x+7l7e2L/Mnll3j2wafnrz/lOE/LfF2ToonfpMgOUowk4NTqJrK1xcubkZfPerrrjr39LSY720z3ZjCZ8uoff4PZ/hSycR+Hi2sokfHBBE3K7Vu71HXNvedWWHnwYTbuOc1bL7zKdz73RWR5mZ/4ax/jQ3/jp2mnB7z98vcKUmQp7e2dbb708nMc273Bxgee4b5lYZQENzNrpf7YdZUFhLoyunvBbJd8ELS2HocsWe+GaHSj3s3eMoKMGxX4u42l+SkFLlckGh0qo9RLFWt14uaXv8PqRx6l1SkOoQrmooNUbK4/CMCsO0DV7Kaqeo2VlZOlDh3PTcdTStZ8fpfrLgkWi2YT9RyOLsup2OkoVNt7PHwguKHBw1nKdGIxBMoVVrHrN4as1kCTXJAP5gxZX8zd6ipwdPkYP/vkJq9tXeDfvPZlPnbzLE/e8zDVqME1TemKC8NQoRTZQI5WnHuP4qmyErspUtXgglmCpnQ43aquiLPEh+/7MF947re4b+sMy2troOArj2ZDYYITcyVxDueq0s2H16+9yeXxda6erHnTX0VfjjQp8ca3X8GLErsxIQyYTj2pmzI9GANKCI7BwgDnLEWdzSKaE5fevMb+rR2qreucPPsgjz7zONcu3eK7X36Rh598iE/8nb/Fl//9r/HWy69YZ10Nat872Cdcucq1pUvciMucHByh8YUjF0olP/JIdQiBJ1V87dCm2L52Cd2PxY6Jud4nJzPDyykj0TY9jSW9K26RGpy5ysQEtcepox55Vr3j4KUtNj9ogRFzoq6BnimL2scFRn7g7FG0t60SS+cVpfY2bezdrrskWAwC9IghTCguFZsayWQHZ9+6zvrqMq3D3qh4UlLqEEgYGc4XKNfUe5nMHQo6EXwRTWlxYDFXfqGqGp44/gCnVo7xrbe+xcvf+Rw/de8znLjnDKDUzpC3KJlJ4XGZy0lv+empa0dKHUTTfVgQJ1A7gUIQBtKwfuRefuX1L/OPnvo0wQ9LXSVFgOZQFWLXUg2CgRiu4sETp3np4nUu59v4SzfYvrnH7uUbdNMD2mRo3KTbp6ob68+4TOw69vZ2yUR87VlZrqgniclEIEeyJvauXaJKLTe2T3LsyUc498R9fPtrLzLohvzU//xv84Vf/U+8/q3nS3fcdvjtg31+6/WXadZX+dmNI1SVx2WMU6b2JFEx7hiCi8m4ZouO1NqwWukKEzz1ngZFgOWxfou39IvaQav4gUd9oUlUIMlDHcqodc+gFoQJs9evsbtZ8Wtf+yJ18IirqH3g733skwYl97w5FfrhTi9cPM+4i0xiy4mVNd6rZrkruGE9DSXdQc2JvauiKlWnfODWGBktkDG2qi88r1hMGlSFTrToGDJCplIrOCnEO+cczhvMrF4sZfKumFbAarPEjz/449z7xEf59SvP85XvfJVu94A863BJGbqKOtji7pstnhIwYqzn1HVmqFAKdyj1kmZmsePBow8Q6yXevPBaWWBFgOYDqKMZjoiFmkGxZJo08KJENh+4h4MxHOyMGW9vE9sZMbZMJ2P7c7CPdzY3M2lmMjlge2uHbjwl5My9x9YZDQNdGwkqbC4PqHXC1ve+yzd+7Xe48uLbPP3hRzl7/yncwoD1Zz7AvU89OqfZ9IX3NCvP3XiTG7JLbgrtBWMPaCwZgSrSKG4zkJuCIiagLX2PlJGpOb3kg4xO1dxhJmqjyDuQCUjtyQkbmdGKDVrtFagz4EYLe5k6Q7V9k+biDpPxlBtb25zeWOPU5gZRu7lGv79yNhDp0s5t9mctp9eO8r0bl+a6/x90/dBgEZEzIvIHIvJdEXlRRP435fPrIvK7IvJq+e9a+byIyP9dRM6LyHMi8uwP+x3GDzx0O8+lLrBRII5TV29x3/Iq0dkMRxviqJh6KiOaCC7jRQutXnGa6f8v5Whs+ULPpn9YSGE5G2UlkahD4OzaCX766Z/h7eWGf/+Nz3Ll9fPkyYQqdwzKOIoeD+2n4BrJxaTKB9OJ9S/E0pfUdSRxjJqGI0urfOyxT/DVq+eZTg7IKsUEDyN0hlC62uYr1Hn4k9FtpIl89/kLXD6A/du7qCZSiiCHDOpYRnWHuqb29fzEmk0m3Li+RRsTg8aRupbdvX0uXjlgY2WJB04uc24t89JX/pCXvvk8a8sLnDh+lJH3PP3pT7KwvnrIZk6Z4M0e6sXLb5rzTSO4oSPXGb8YkKHDbXhYC9aEbJW8ZyeOnSgG8VNoNK42oqcWu1fK5ieh5985XChMZ8X8AKYJ2Y/oQUIPIu0sc+XFt3jhf/hVxt/5Hvtb2zxx8l5+7MHHCK5i7iVd/hha6dhcWkIkc3b9BJ+473GTefxZgwWL9/+Dqj4GfAz4r0TkMQ6nfz0I/F75GN45/eufYNO/3vOy5aY207xnANt2hlPlY9d3aNbXMWdNQ14SNpjUozgycdbikgWMFHSs74ZTUrBeYKqa8U7n/CIRG4ZjtbnxigaDIR+/71nOPf0pPrt1ni986/PMdvdocqRBCWKDkCRUBlk7wHtCCNRV
RdYO45kbHyk4GwPnvGNzcY3Bmft57sVvQLRmp6OYWqhQDYdWl2XltekW37j8Frev3eL2fiRub5Mme0xnE2LqTH7sPCFUiDPfrGpQ0Sw0ZaSccaIGg4Z7jq3hkzIc1OzsTLi5c8Arb25Rec89xxZ55MwiF776Nf7g3/06Ny5c4oMffRRf1UgzxFc2isMFz2hpAWmEb1+4wPb+2Nxd2mwO/OuBcKQ2Ov5uRg+S9U1ahXHZ3b2RWXt+WZnebfXKICCN0Zi0cTDwuJGfT6PmIBqxM8E+ke/evMpvfPPb/Mpn/5Dnrlzk7GNneObUMdzSIr/6tT/ixQuv4uabGkUpm8spKQwELly+QttN0DvskX7Q9aN4HV+hDCNS1T0ReQkbUPQLwF8pX/Yv+XNM/9LyP0aIywQf5o4dx27d5ul6ga4Fl4wl64tXq2CjCRQzv86YVY+ixK41X1zNeAnW0U86Jz7Gsjh747ti12B5bWEZO+c5uXKUzWd+hucufZdf+cZn+dSpJzhx9kFSFehH4RlLTObjwr2YOm88G1M3A6rKxir4nl6fM4+cfIjfv/Ab3HfhDY4+8Mg8/8wYzWfSdkglnN+/xc3tXSY5wP4O4fZ1Yjcr5GMDBNQF8yBDaKdTmtpDjoX+oUzbyPWbu7zw6hWauibGPU4e3yC1kVu3D+jayOpyxWA04qmHj3NzZ58X/+grPPPpT7F6bJWcYbC0gteOHBMxRt547Qr3bG6yJ2NOLK3BquKXgnmLTY13ZfJ/8x4jqKVR4nBeiWTbHMWZE2jjzN62MXg45Wwj+ryZWNBmcgU344Tzr1/ltWvX6dKMM0fW+NDZE2wujew51pm1/QUq3aZLaoxz6VX4zOe79DNfjq0e59TRm9wa7/Cty6+z307+7MFy51XG5X0A+Cp/zulfdw4zqtePzMU6PSyMOJwqn3zrFgsnT3Jztzv07K1MX6/OkR34TKlLIr6q8ZqsT6G5zDSB4CrQVFi9ghTjNjOtKEYWRdIrWJqWMUBqEGo+fPZpbh25ly+8/FXOfvMSH3j0o/ilJWKxNw3OHocWH+Hga4aDftx2Mvo8NmRIxbGxuMbmmYf4/Gtf52+duQc3bOx0E6PCe/F0uePrty+xe+MWadoi031SOyPFznhUZSSG98HAkJToNLK/P0G8UFc2szKJ0HYtF6/eZmFYkbqO1aWaDz5yjr1xy7Vbe8QOxuNZ8RtXtq7e5Cuf+wpP/sQHWFpc4GCr45HHH2Y6HvP6+bdZGDaEOnB9vM+5pvRH2owmOzhEfEEPTbGqiaJSzUavKcpYdUDjcNmR1KxjUbVnnIXZQcvl69u8fPESl3a3WawrHjq6yS88+QgLqVBjAD0wB85J7Dh+zymqSzuM6obHTt9f5AW2AXrvD8VhKCujJdoc2Bkf8NF7HpxT/f9cwSIii8B/AP63qrp7aK8Jf5bpX3cOMxqdPXc4Bij3dHPl5M1dPlwNEBeYFF8n7735HIsZghscWP4iYnMGtTK3xhRx3uyAUteZXVLhX6WccRiNQrwhK64MVe3tVBFLhQQTS20urPMzz3yaFy59l1977rM8vHmGhx/4EDoIuFCRNZGyo+2mDJrGds3kSDninTezB2eAgMPx9L1P8OuXXuaVl7/Dox/8BE4cXbYmXAiBtydbXN/fY3b7FjqbIqMl4nTHJg+XvtFgNER8jZMK383outY2ic7smkZNw3BoXmWTWaSNCZzjxq19NjcXeHDhOBsnNvmDP/4Gb18fM51MWV5paNUT93Z54+svM5tOqGrP0eOrXHhtQtfNePDZR7l9aY9Lezu8ef0yr79+kXvvu5fjssRSM7SzVosL5CTjh95MABNI5W1qWJetCaygNUYrQph0Ha9fvskrb11i7+CAIyuLPHbsKH/l/nuoxCMpk6dWJ8aY8INgiz9l0iSytnyEY3urXN/d4V/90W+zPBjwqceeZX15nZRiqYMEUTv17l0/zgtvvsyj99wPd6zrP1OwiEhVAuVfqeqvlk//uad/HUYOxbi67O5ArZlPv3GNhTOnGc8y06QgpjBMKF6AAh1Gyg3AUYVCOOy5VyKmVc95rhXJKZOzfQ+hKi4o7pAUWVR9PRMVBzl1dto5x+MnH+Pk0hG+9NpXuPT13+VjT3yCZnWVflT3oO/P9MRIsnX6vb0OXzaDzYVVHrzvSf741ec5e+MhRpubZi3kHD5UvD3Zo4sd0kbUBzRHSxFLeuXF40NDqEckbYkoqDVJTQRlkOhDmxv8nZ99lstbB/zhN17n2q0dFOUPvnyejz52Dw8/eY6PPnmW+uVLvHwhMpsqPndcv3rFrI9wzA5mvPbqJa5dvEI3bTn/3Td45OwZbsV9fu/y68zijFdfeoFfeuBD5NTYQCfnUK/QqdnU9t5l3rKDKJGUFKk929sHnN++ycsX3qKdzrj/5BE+fvYU666hqjzGlLS0WQcgtRCnidRGvDO/ARVB05R6oeGRM/eQrlwwh1PvmaaOHOO8wO/19yKOx0/dz7GVVcQJC9WfgxtWZtv/N8BLqvp/veOffgOb+vXP+dPTv/6piPwbbKz3D53+1VNEUEPFUHj87Rt8YGUVqWrasRVdSU2brhnzrXKmIQ/ii2bBzce/+dxrQhTV4pbvzI0yKkiaEXNC2mzjuLPRY7IUTyqKU4raaFdVCGJzGH3wHFs9zt985ud4/tqr/NqffJYPnH2K+048QNXUhN5EzwYkULmKTjtybBGp0OBNE4Py+PGH+dL+bb7y3a/yUz/x81B6TaJA3dgm4D2aOrSdYB5oAAYWOO9xweNlSFUFWhGEgzLoyZNi4uLNPW7vz3jq4dMc3Vxla3dMmkWGiyPSZMru1i5n7ruPlbVF6vAKr1/bw6fMkw9v8o3nLzEZT9Gu5ebl6+ztHSACbTtlGDzjWUuD1R63DvaZxBkrS8vkccLXFH24QCNzvbjN1oRbt/d56cpFXrn4Nvrm6zx1z1n+2sc/yOrCAl5topdE6/Fo3TM0jGSaY8LXoTSjhTTNuEVTumpwfOjBx/jA/Q9b6uv+9DK/MzPKCmujVUSE4N89JH6Uk+XHgH8APC8i3ymf+z/zFzj9q8/fbOgmrI4n/PzVPQYP3E92nv3Y9T1+9tspXioG3nozmntjBkfGZqD4oowyBXExMSiAgfawofe4GC0VyxGJkeyM9yWSzW29IG0+daWQp7ABIuIrKlfx9KnH2Fja4LMv/wHfu36Bv/rEpwgrK/b7y+mmmgmhMsFSSrgYUWd2RhtLy5w9cpbXdv6Eh994jTMPPkTOkEQ5SC2pmxkbgGzBVgDMHtnr57l4F9DsqAdY0EzG1FHJLrI7nfHL//FrHFlfYXF5xIn1RU4dXWRxEDh1/33s3trj2L2nEe149qlz7H71Za7vzbi5PWZzrWE6mbDXKtvb22SF4WgEInztxZc4vnaEh+47w/54SiRyI8+Qg21ONasGZjTevMiikINybXrA869e4NWLlxiK8NjJo/yMDhjt7HLs+Dny8jIyMY2/quIGgTSOVstk20hzF2HkobbFnZ2SfZECJAqy6ebDsHqdSy4TBe6cT/l
Ot/6e3feDrx8FDftj3r2t+Rc2/cuXwj7kxN985TInjx1Hs9B2iWlnKVNwxbDOFygwW+/Ehq56khhrNcXeYd12PMm9ztpEXeRcxk4ESw9ywAWly0Zn8Zjjex2sp5OlN2Irw5VEUC1DcpxwfGGTv/v4z/Hazbf4zW9/jk+d+yDHT54pPSHbDJzzuMqRU0dKLZIDVDXeOU5tHGPaPsznX/kaf+/kSZrFJUKhXeh0YovAC5rN7rXvqAffUFUDQtUYfCsO7x3J28jx2HXENEG6GTu7O8ymU45O11gLNcMTDQvDisFowPL6kIOtXUYrmxxVOHf/cc5/8RVu77XcvLVHbCOSzSXFB8fy8ojNzQ2q4Ni+ucP5q1e5evUaXddx8do1zq4d5Z/9zN9mWNeknLi8t8vzb7/N629fZBBnPHbfWX7pIx9geXFIGmd2Z5dpmnuJN64Q3L0kzP1GADotREpBZhQipX1OR4JOxVK+WCgZk444mZW1WGjiZTeW4qI/F3vdQZrMmgk/4AS687o76C4CqkLlhI+8dZln3QCaGhVhf9IV50dFc2TkbCKYxYIRFTWneaNQMY5RKjWJQ02n3mvUXQDReZGc1eDcLFpSapnruGMXyUIZFa6EYnqhUmakiLMF7Byry+t8eGmFpYUlPvPKl3ni5tt86PGPQDUkVAYgiEaCBlIXic4mZokLHF3Y4NroJjunTvP8i9/igx/7CZz3dLE1czjN5BhxsaXECYINKjUDjUKFL2Iyj+Cqipw7BurxwxGIMhwt8MxT9/PUoydZHi2wuOCoa1g7uU53MCN3HcPFNe47dZRzp29waXvGwd4BB7enRhVyZlO2uDigbVuapmHt2Do7e/sctDNi11F1M7QRrk72eO38Vc6/9RqDSeTc6aP84rNPsLK0YsI2zchMSZOINgF34hT58m2YtkDxKaCgaMV8L6d0OKjJCZJKNuHLqRCEXMNsMmGYZ3OpcY5mcOgKQmfeYWWep6bi0p9RUWLq3nWZ3h3BoiCi3L+9z9+4PqY+dQLnbGfcP5iSMgQxyv6wsiBSMeq9z4mIjaroj11VCxPnwBUUTXt2Mlhw2hZth6/zOBvRhVJG8blgwys0I11nNJvCdxI1aoZ4VyBcIaiNj3vk6P0sVkN+75Uv8vZXfpOffvKvsLqxYe+zvI5QN6Q4ZdZNaGSA9zWn1o9Ti+NPXvgaD2/dYnnzGAKkrkVTi+aW+UxrozwQmiE4RyKVee7MZyc6VRZrh4RATMqsSxy0kWs7U67uJ964tUUlmSfOZRY3VhhuHKfb3UaccPTYUZ55aJv40iUOjq2hs8SN6zeoRyOawYBRM0AFLl3fxyE0o4ql1VVm4ymnTq6zsVbz5T/+Ix5cP8rffeYZFheWjDRaO7RyRfot4D1ZWnIHcmQNfW4fmc2Q0ciallHRBFqZB4AUIZk22IY37ax31glUnpQyOhrwW1/4DHvfaXACbYz0jqYx2emT+55WcdrPKRfkM3Nja+tdl+ldESyVJk5Ptvi7O7usHF1mYX1AVQ3Y356ZEXhZkKgynbXUdWVQLBnnwqEfcQkMQecM3n6mydwoocDTodBcbEQEfYgUkl15Yc7ksU4UlxIp5jJhWPCqtDEjvsJJgUUVHImzK5v8zSd+km9efYlf/87v8Il7nuah+x4tfAmDv70bMW3HdGmKiLI5XOLK9jXOPvwkX37uG3z6J/8639u5hhRnfCdAqKHt7GRxgZQj+3tb4GFhMKKqa3A1znuOLnhWfOJ7b16k7SJ7kxlpPOZL39jlxfMXePjcGT746Ak2NxYtrVWol5bI430GC0s8/NBZnMscXa74T5ev40WIbaQZKDd3JxzsbuPCgMFogHZKCBVuwdOMRtx78hSfPneOSp25WmalH0DkFBuAVNtg2Pago5soceSoRovozgxZWUSymFm7NydPHZsc21UFTWu8mfp5u6dS3P4HSzWzNybcTDNraJbmo7mIqlFpbOsjxkKK9cXsT97bGPyuCJYkjvH+mFffOM+lzQ3qnR2OrC4wbqdc3N9hezJjFEbUrmGS4MzSOnW2xl1yRojzviBMmoCMOo9TrGnpbdZiP3cFOXR17E30nDf9hXfGCkDyfCxfn7J5BxSrpDZOaXPLUITswMUSqMGTRTi2uMrHzj7NK4NVXth6ize++gafeupTjJaWEFWyKIN6xKwdM5nsoc2A06tHub2/w7fDazz3+gu8fvsG5A7VZDVTcCWQHRCZ7m9TVYHBwgDVMSJmqudwbKwu8eGHV3n75g63r92kIzOoFxjVQ549d4a//hMPcnRpxLBtYRrJ9RgJAYJRd0bLqzz80H0sL4744pdeoV2u2W9tXMPk1jazyZi1zQGjxUVOnjyCCkz3D2inE2LKdopo8VXzd3gg52yEymmh3+cy47IO5JNLpFd3CfcdNXZx0a6IQqqt/MaVA3Zq6Vcqt8NVjlQLjQuMsmN/0hHKwLDcN+OEMlZCmR50dF1ksFCXzKbIQt7DOOyuCBYVIW4c58GTkNqGvFXh9h1pzzGctOztjLk9vcJePGA/TXjVAcETQkMzWGBUL7A0XGJ1sETjBoxCw8B5AwQKwuYKOKCqZo0abES08/0MdDe/YeRM5b0NK02mf9Hi/IITJCcUx7AekcvJlwGfi0kcjlx5VqoFHj/5AMNBzaSd8T9+/TN88v4Pcfbs/XOmQN0MmUw6ptMDFkaLXExjPvLUx/j3z/8hk9bETLnYLdFP1xLF+wGLy6u266aO6XRKSjBaaPjZD9zLEw+fxI/3+S9/4VP869/+Ort7ExoRji0NObc6ZDELDRB8hbaRNJ4QGn9HPZxplha555F1fvKT1/jlX/5tFpuGrS4z68zRf/fWDpNpy2w6YXllico7Yuy4deEmacNqwFw6974qnmvRUEwJ3tQLpfcSHXB0QH5jgiwGc4UhkNqEi4IvgaViLI6k5m3mxaHenquLih9VbAxHVHnHTjMUjzdPBCc4MWMUFRiOBvggheDZo5fvvk7vimABmLrA8uk1TqSaWTVgMBAuXRmSB4usnDqNy5AQ8x6eTplNJkxnB0z2dhjv7LNz7TrX4pSxtkQHoWloRgssDJdZH22wPlhisV6gCTXDUFmjzEFyVoOoKwbilGMfg3mhMASUQlcxVMWHymBLdD6r0OguGRMumcpz0TU8unkvr968wMee/DhfevU7XLx5kY8+9VH8YGikm+Ei7Wyf1LWcWj7GbpwwOnEGffMV8JUVtQnzas4eJ8ri0jK+HtC1LdPxlNhNqetMqGYcH1U8+sgJ0MjJg5YQP0SatlRdy8bygNGwpqkDlQSca3D1EJJND5Yq2WwaoJ21qHecWBlS+yFh5lhZHDJtp1ZzVI7J/pgbbWRvZ9/QtbpiZX1AisX7wIkxiZP1wCSIeahVAA4/EFxlUHkaOlxONiJ8QRAXcElNYelB22RFeiGtUovpk5LaaD4BGTiObaxRX99mmnuCbi7wsTCdduSoDAZVGZtn/bVD4753v+6SYBGSCLcbz9F9oakdvglUoZ1/xUHXcnM24fTqOs1ghcHKMivI3OfJ50K/bz
va2Zh2csD+/g47t7fYvfoqV7oJ49zRVko9HDEarbCxuMrGaJ214RpLw0Ua8Ub4owSGmumEGbYK6ix4pl1XQAZDynqKTN8QRYwI6AtMXYvw8No9vHL7Ah9/7CO8fu1NfuOLn+FTT/4YR4+exHsYLC7TTsasDYZcvHqda+NtiBOTTYfajPR6A3FnaeVscsB0vE/XTgElZYyHnRSdTQkrCywPFvn4jy0xvnWDrfPXOdiP7McpPozQkbOhT6Em01kKq7ZJSBUIvuLmpYu8+eoNfvpDD/PW7SnfevUNSNEcdQTI0M4mjIYDJvsHDNaXObWxSRhWkBy57cgDT+XN/V+qCinDgySYLVKOkJ3SBcEfH5CuHOCfWkWjLfScLWD8yJstkmNuvRqCx1X2/a5x+MXA0ZPrLFx+gwNRxAb0ICJMJzbRYDBowGWqwhbpdUfeHw56/UHXXRIsoOK46pUnBsH0Dd4aS7kU7LupZZJaM/tWy1d93+kWAQ8+ezR4mtGA0cYmqwhngYTSdR20M8bbt5lsbbGzs8XOlQu8mp5ngpIHA8LCMiura2wsrnNseYPVapkGIbiAVN702UmZTCcsDEeEYB0ZFSCbjNiFCkGt6Vg0MkbIFB5dP83re9e4/+g95M3TfPY7n+cjZx7lsYceR52nHozQOOPMyhHkxsu44HA+UC2u0LUVVZwS9/bwocY74WB3hy62hODxfshoaQ3vPO000+7uExaGuMYT2xk33rzB5Qu3uL0baRZqUjTNvxslkt6mWjAvMlGPDxVSNWVKc0dVK3vTlis3d02Q1hlx0heVpyuanTp4NpeWqYcDCB4GnmoUSCTG047RaMFsYh1WTDqHL2P5kkAOEBfBXdvGhXUkdbDgzLSiEB/90KET83fzlSd32QzSwXpabWZ9dZnN7LjmjBqUNdPNDOwZjnp9sUHSzlkQG/z+XqFytwRLYdpe66w4902YD8bpCmW/9hVnVkY2QMiZPZEWc2xVxYeerWyFoBYNSe+Q7+uKatDQLC+zefZeTuVkE4bbGeO9fSbb24x3dtm+covd9jpvM2PPJdqq5dSRezi+fprNxXUWQ80sRxbnxEjmJhY9KtePsbYzypO9Fri549zSUS7u3yII/PzH/yp/9L2vcvmbt/jU45+gWqjIzrPsKz5y9AyfjxN28x4km9+epnuAjeigzH9ZXF4zTY8bUfmaELzVBr5BFjZhuosLDr+5xm/8d58np0DdNDx071GeTB0nfWb52BJ5YgVwGA2tqYkjVBUSPM2w4siZdQa3D+huz+anZ7+ygjpi6vBOuXzlOv9h9lXaD8KzJ+9Fq4L+ZRgfjFk8smDDmDol73XUgwYqU8bOOkErIXYdQydkMbGYdECxYs8pkasyXqOkzao280UPEoygXqh4aG2TV3evk3JmVpz2B01dCn0xy19nGUGKCV88Fv7cRMq/9KsssitZ8Ml0GYhx55zzZIRRMzCaffEpVu8o+IoZv7XdfDBn5f2852KzSYyXFAuNvkdGcghUdWBxYcTi8WPGM0od2iWmBxOm4wMO0gE7N66y8/J5LuYxW01kFpRHTz3ByeVjHFlYYxhqcy8pZn8O+30+Wz2jYiTOrNbRP7mwxo3929zYv8lfe+yTPHf5VX71a7/JTz/+STbX10gID6wc4dXRAftff4Vc5rCEklWnlHC+YmXtiHHfYsC7Ci9Q4fn6y5d56PiQlKZcvXWA7E9Zv2eD5WPH+d7blxgcTDl4dcz+ZMwHZi1nZhNWji9QF8M670d4V5PVvNw8cPvmLm9fvkbS4necbHaOaYksJZ21JsA78cg6v/Xcd3hw4whrg2XEOUIt7N86YPn4klnOTpM1T0cBV5xyyEKuPXWXcd483HJO4CG1nRnu5YyLGRFPGkdkKdhcGW/9G+0yaa/j9OoGC1tXuamZGCNNXRVXzEJzwZA2V7h3OClUorscDeupB7ed6cPrYnvTZiGXoaLOlYGixcmlt0lyZMQlE3gppBTpklmoxpxNKKaZEKxzbwFkaZ/53rp5o880LRVu4Bk1NcO1ZTZEyPc+BEmZ3t7i0tULXNm6BK9d4IXqbQ4WPUuDBdYXVjm+tMGxhVWWBiN8odf06aSIlLTFaqAjS2tU011e3XqLp07ez+nVE3zupa/yzLFzPHzmfgbVEveePsnLX36BpNA4OHn8FK+9sWcLSAQvDTmBq4INHHeZpoKt3W2+9Z032frqd3l1e4+feeocZ556mP/yH/4k/+KX/5A3Lr/NpNvne1cTkzThI3KWcyNhbVCZVEHL1ABfcfHyNr/2u99ia5zY7WaIE7MZ6t9fbwABiPdk53jxtTdYWVnitctXeKpeoG5kPvFZJoJ6eybaOGQaqRcCs30bY1c5h3OZFEGGNa6bkcTGUOQuwSQRluq5C06eKm6pQqfRastkUPHmiU1OvOa5kiKV98U3zp6/D8G+t8hBfHGoid17T/66K4JFgRACB9Ezbac0zsbQdW0k51QgP2c+T2Vib/Aej9CpGUUIGadK5R2arHFoebWAllkcZd6IUzuGyQpOSSplsG5hHfccMMrkKCylG22u8eDGGg/6p6FNtLf3uHX7Jlcnu+zsHfDSlYu8kKa40QJH1o5yZv0kRxfXGFWNoTXR7F1BCWQ2Bss0dc3L19/kgbWT/M1nPsWX33yRm698k6cffpKGilOPneXW1W3yxbe5tH0V1LwDPMLA13RFFgCJPNtD85Td8QGXrggf/uA57pu2nDt3FGXC6XuW+N/97/8Gv/Efv8Uff/VbXB/vMr02Y315yD0PbNJ1NkrQp2Q0Hm+S4rd3x+xOOxyeugqEYCO2RTD/r7IYV1dWOHVymYsXtljxQ17Yvsn6wVFOs8zV67c4eXyd3Fod4Ye1+Rt7oVoa0u3Z2LsYHVXtYJyR9RqGCYrBoRdHDOYDYFPECs49y4hC7CJePTJy1N7z0MoRXty+RBIteiWrcw39Mj6HanERKuTK99B+3R3BAtYMiiEwyZHFlNDkmHaJmFJZDI4qFB5XLu6HYt5gZhdaxjj0PZVsFBD63knxuPClphC1o0Ryb8pXpq0kCtco2785jw8BNCFA7ky77b1nsLnGmePrnBahjZGt2S5Xr17i4NJV8stv8l3O87WhsLCxyonNM5xaOcbGYMl2z1yjCqu+4uFVz6u3L3Jm5SifeuBJ3ti6zhe++xUGKw/y2OPneHX3Oa5m6+QjzBWHdQhITjYFazYlx5aDwRAWVnklKc9/4TuMlhp+YXPA6Pgiy5VnbWmNf/BLn+Lcfev8q3//OcbTfV66eJUP7d3L4kZDlRqTdGvGaWZ5bZngAynPcM4xm3XFCsr6Us1wSDubEBTUVdy8NWV1Y50jawMWFis+/+qLPLt6kntPHqceNEXhCnka8bURJOvlAQdXdwiNJ83MODFvj5EjjWXarkxEzhlXmzpWvDns+8YM/qIo0jgDU5KSu8SDZ86wefsKN4qZnAWCZSoixVMOYT7064es0bsiWASrqzpxjHPEAZMuMu5Mo1p7X4YFGeXeO6Pa95Y4pltxVGWj0TIOrde8a/kFZdapzZnXh
Iqb+41lESRH05tkg2k9hcmqNo5NySTJeIKNiVAzdHNi6cOx4TrH7t+ku+9hLm/fYHdrh3pvit/e4eall/ly/CbThZqN48c5c+QU96wcZzFU1GHIYxsP8NrO27Sp5d4lAxJeDYml4YAwTqwub3B9clAo/4nZbMJwo0ZnMyTPaNspqpnkA9XCiElnPZLl5WV2X7nN81+/zHh/z9L+QcX52ZS9W7fpJLMjFbe3x5ySdRs710V8SiRtOX5sg2ZQw545N/oQ5szdFDtLcauKpdUlJARGaysQO4b1iIVRg6hj8+gaq4tGYaHYfwmO3Jp3wmB1hKtqUjT1avKeOGupQkUeWI2iQJ5ZM5lxNC29F3RmyJwEqxWzYoIzhNWlZR4bLvOH3W7JXsr0gb6IL5LyXuN7mK794OuuCBagLGxlmqwxNJ10xOLqCBkfTF+iakpJVajEWKPOyVy70jNwq+DNF6znBukd3K/C0bLubjFP1DyXE6vYbmUbuJpvsRp7GQQXBMr8R1Hr7HvvrWGZEs5VPLB5mrR+ituzHW4c7HDaVTzROabXb3PryiXeevVrPOdbqvUVTp44wz3rx7hv9QwX966zP7nO6vIybmVA3N1iMtlnEDwj35BiS0dib7xP5WEo0CabAJbFEFvxDpmA7s64ePUttuKE+0+ucd/jZ3nwYydolpd58T9+m91Ll6momIowkwxBaLuOhUUzAREPo4Wa4aCysRgwd5Ppae5aJginrAwb4e03LrA4GjA6eZrj62vcDi3LNOYWWheulmBpFFY3+CzUC0PSrCPTkRRS1xlqFQJ5UCGCubQ4gaWiQi0FvRNX3HScmVtMIm6xwonwgXvv5xsvf4dpYwOvRCg+CRZsIg7RTAiOqjQ23+26O4Kl3/2dIzlHl+HG1oyZZIKvSNnET8aqdWiZSBy1s9qkUEycWrNKnLObIVLMCoxa4so8FUHMJb/8ehubZ3a93uW5hVJWJSpUBVSw6b7eNDRaejxaRGVZ5yrL3lXMezg6XOHIaIVxnHJjususWeTEuQ/xsHji7T1uX7nC1dev8K0XXuKg8awc3wTvWNUjXHmtY2l5gaObx7hw/mWG9f+Puv8KtixL7/vA31pru+PP9em9KdtdXdVdVd2NtgAIEARAAiRASgyNaEaaeZqJUExIoaeZh5mY0MOEhiEpYmYeKBdD0WlEkaBg2Wi0r+7q8i6r0pt783pz7N57mXn41jlZgNAFkEIoEjsqI7PSXHPOMt/3//6mwaQc4ZTQ9w+Pdlhu9tBlwGcBawOZSWAwZrqxgcoSmkstzl84y6/+1S/S7CW8+fYt3vr+27z67i1OtLo8+8wTvHXjNvujUnwL6hg0CwRfoxNFmmZRhuvF2TFOvhWONElRJsFWlsHhCKMM3lpG0ymTqmK0PSBfzmWomyqBjAGVG1RNDKUNFAs5R/cn4AP11FMNK0HNnBPhRRUk6dh5OciUUPKNMrjgxd3SgnagChMl4YHVM6s8cavFG0wjL1DLAFN4ubjgyVJDmuo5UvaTnsdjs8Q+g5jUNNgdczAsUV5UiWk0pxBNZPQujk1ucFZeTMycch+CmCV4H6soArmOzobxYxhl4k0ReWIw90kWQ2xRYRrjUEHjvTinZIlCEsEVM/2Miqelj9HT3svcBx3m3l0NnXKuvYwNjv3xgFvTQ4p2yupzT3LSPIufVAy2t9jdeMitjXVu33+APdFhpxX7J2MYj0dYJVHUSmvubW6weLZFI8uoXYryJdQlk819eieXWDy5RrPfQrkJ/+o7b/CtV97n/vou3TznU9kS/5svfY7VX32CV19bYefBDkmR4/xUBo2JmEBMjo44it7JxqSY1FCXNc1Wg3a7IDEpSSNnsH+ISQ0nTixxuDvEoHnj/Ts83VmRQOJUofMEZyuRhE9dRAs1ITc01loc3duT98dDtT/GjUt0M5fm29RgvYRBOaHrh8oLvT83UQouGTG+ErhZ2UCiEl48dY53774vbkCRmR6XHc1GTpKoiPL5Px8NfvDi31VVFUkayLIAZWBqS6w2FCTMzmwdCW+yRmXCr7wXxxZl5t5bAjnH0io2xzOqv0doDkbJ7QFK7FeVpFeFoMQWKGrenYLMpOhZ7hpimI0C54SCkURDi6C8uIjYWP4Jrity2GBZyBss5m0qHLujI8Z2ykLWZPHUCr3jSxybXGJ0NGB3sMcrgzvc3bpHOZ5S1iOstXLye8+0rri585AnT5+laRzDumRwtMOgHtNpLzMcH3Djzk22tveo6hqF5kyzw988dp6nz55j8ReeRzU9L75wnneUJWs2cUqsZ0XupnjzzevsHY4AOQx86aKTSk2Wtbly9TQ7+4d0mmscHAwZHg05e3aVcjDm8GhM1l1ltH1E7hoEZUmnAdPJxWdNAzagnRwGWbOg3J+ICGvq8IcVOs+iF7KWG4TIEA7gLeLJMKlko4QEP7JoiyQt54FgHKfPnuT0+m1u+SmzUzFoTaORkaTyXlqnsOHPAzdMCbqknSPLDEU7Y6kF+wMnlIYAxuQi2tHiRplqOdFrJye3hBYZCDEbRTFPvoUo3w7So0hhBuKY7h85IzJjskotG0PlAVFkeh+HX5HKrSKdX3yymMPVSaIIJFKKIeiPEJY1KhjpKazA3+3OMsEH9soRdw730SrQSzJUmrK6sMwTiWOvHAupUcZHj/qFENg+2GO53+NYv0/pHZu727jU8OG1W9TOEoKilxZ8/vQJTk0NT+xrTk5zWi+dJjvZxVb7GA3PPHcek2ZoQpxnOY4OB/zuH7wpZnmRj0bs36rKsr97yMO76xRZymBQUk+nTHXBw51DmklOq13w7vYm1x/u4WtHUnvalQIdhDHeLEh6DZpLbfIkIRxNCaUlQZGPFeb1B/Qv9UkichkSg5pKr+iH0Q7KgPIK08nQRuMzxAVnKj2o8h7TzPn82YvcuvmOGJckhjQ3gq76SJ8MkMxyf37C83hsFojUXk9bp+g0o91WFGnKuJahklAsNEZpdPCieotuIRGunxmIIROzSJKT90YYWkHcI1WkOwTnxI9YRTPxoEhTYQwIm0QUfR4p53TwouAziUReREIfPojReJDfcz766RrpZZhh/AFQicwEbEldOcpqyqSaMpyMOZoM2Z0esV7ustZeZK8asuOGBOWZ1mO8ltr8401oIHD9/j067Ta9/iKV97hkSM+0Ge8egqs5F1qw7tjyJUkjofeFZRbPJFSjPYwJpM0GKaJrl4lTINiKt1+/xsHBhDQ1VLWdf06lFM45RuMJH15fR0XWhFKKuioppzWu32Z1qccvP/Uci2kbHzxpEIN2r4VlXFvHdFxSplDZmvFCzvZ4xHQ8paws1+8PybY0eiROmB5FsBbqgKk9ygWMFttfowJFntNcWMIstmi2C/KppjhMyEYFp8k4phvsJp4sVyJHDw7rFJWtyZL0E5t7eEw2ywwSzn2g6Q2J0RSFopMZhpXHu0Cio6mFh6A1RkczbqVwXkI35baYCzJmHcXc0QWto/GERFSgxXg6UzM5sTiIKASOTKKJho66F+fzyD+LoiYlV7dPxJJHh6jM