The Pepper social scenarios are implemented using Unity ML-Agents. This project is still under development. This repo accompanies the paper "Social Behavior Learning with Realistic Reward Shaping". Please do not hesitate to contact me if you run into problems; you can let me know by posting in the issues section.
Tested Unity version: 2018.1.0b13 (beta). Tested Unity ML-Agents version: 0.3.1b.
Pepper robot approaches people: this environment trains the Pepper robot to approach a group of people from different angles.
Visualization of the personal, social, and public spaces of different agents, and a sample image-based observation.

The robot approaches from the left or right side while respecting personal, social, and public space (red circles represent the agents' personal spaces). The learned policy enables the robot to approach from any point in the space.
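As an illustrative sketch of the proxemics idea above (not code from this repo, whose spaces and rewards are defined inside the Unity scenes), the three zones can be modeled as distance bands around an agent. The radii below follow common proxemics conventions and are assumptions, not values taken from the project:

```python
import math

# Assumed zone radii in meters (Hall's proxemics, approximately):
# personal space < 1.2 m, social space < 3.6 m, public space beyond.
PERSONAL_RADIUS = 1.2
SOCIAL_RADIUS = 3.6

def space_zone(robot_xy, agent_xy):
    """Classify which of an agent's spaces the robot currently occupies."""
    dist = math.hypot(robot_xy[0] - agent_xy[0], robot_xy[1] - agent_xy[1])
    if dist < PERSONAL_RADIUS:
        return "personal"
    if dist < SOCIAL_RADIUS:
        return "social"
    return "public"

print(space_zone((0.0, 0.5), (0.0, 0.0)))  # robot 0.5 m away -> "personal"
```

A reward-shaping scheme in this spirit would penalize the robot while `space_zone` returns `"personal"` for any bystander it is not approaching.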
- The TensorFlowSharp plugins folder was omitted from this project due to its large file size. You will need to import this set of Unity plugins yourself; you can download the TensorFlowSharp plugin as a Unity package here.
- We strongly recommend that users get familiar with Unity ML-Agents first.
- We recommend using a Python virtual environment to manage Python dependencies. For this we recommend Anaconda, a powerful virtual environment and package management tool.
- The Unity game engine is required (Linux installation download link).
- (Optional) The vision module can be found here.
- Inside the `ml-agents/python/` directory, run `conda create -n myenv python=3.6`.
- Activate the virtual environment by running `source activate myenv`.
- Install the requirements from `requirements.txt` by running `pip install -r requirements.txt`.
- If grpc dependencies are missing after installing from `requirements.txt`, install them with `pip install grpcio`.
- Set the scripting runtime version to `.NET 4.x Equivalent` under File -> Build Settings -> Player Settings -> Other Settings -> Scripting Runtime Version.
- Add `ENABLE_TENSORFLOW` under File -> Build Settings -> Player Settings -> Other Settings -> Scripting Define Symbols.
- Make sure that the relevant `Brain`s are set to External in the Inspector.
- Use the Unity Editor to open the project folder, then press `Ctrl+O` and open the scene file at `PepperSocial/Assets/Scenarios/PepperSocial/PepperSocial.unity`.
- Go to File -> Build Settings.
- Tick the Headless Mode box.
- Set the target platform to Linux (x86_64 build). Building will create two files: `<environmentName>_Data/` and `<environmentName>.x86_64`.
We strongly recommend moving these files into an `environments/` directory inside the ml-agents `python/` directory, so that you get `python/environments/<environmentName>_Data/` and `python/environments/<environmentName>.x86_64`.
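A quick sanity check for this layout can be sketched as follows; `environment_ready` is a hypothetical helper written for this README, not part of ml-agents:

```python
import os

def environment_ready(python_dir, env_name):
    """Return True if both build artifacts sit under python_dir/environments/.

    env_name is the <environmentName> used when building, without extension.
    """
    env_dir = os.path.join(python_dir, "environments")
    has_data = os.path.isdir(os.path.join(env_dir, env_name + "_Data"))
    has_binary = os.path.isfile(os.path.join(env_dir, env_name + ".x86_64"))
    return has_data and has_binary
```

If this returns `False`, double-check that both the `_Data/` folder and the `.x86_64` binary were moved together, since the binary cannot run without its data folder.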
Inside the `ml-agents/python/` directory, run:

`python learn.py environments/<environmentName>.x86_64 --train`
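If you prefer to launch training from a script (for example, to sweep over several builds), the shell command above can be assembled and run via `subprocess`. The build name `PepperSocial` below is an assumption; substitute your actual `<environmentName>`:

```python
import subprocess

# Hypothetical build name; replace with your actual <environmentName>.
env_path = "environments/PepperSocial.x86_64"
cmd = ["python", "learn.py", env_path, "--train"]
print(" ".join(cmd))
# Uncomment to actually start training (run from within ml-agents/python/):
# subprocess.run(cmd, check=True)
```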
- Twitter: @alexyuangao
- Blog: yuangao.ai
We use branches to keep the experiments clean. The following table shows the configurations and their corresponding branches.
| Configuration | Branch |
|---|---|
| Vector + LSTM (Baseline) | [Link] |
| CameraOnly + SAEV + FF | [Link] |
| CameraOnly + SAEV + LSTM | [Link] |
| CameraOnly + conv + FF | [Link] |
| CameraOnly + conv + LSTM | [Link] |
| CameraSpeed + SAEV + FF | [Link] |
| CameraSpeed + SAEV + LSTM | [Link] |