This repo contains the following items for Project 2 of the nanodegree:
- Continuous_Control.ipynb: a Python notebook containing fully functional code, with all code cells executed, output displayed, and all questions answered.
- README.md: a markdown file with a description of the code.
- Report.html or Report.pdf: an HTML or PDF export of the project report.
- checkpoint_actor.pth and checkpoint_critic.pth: files with the saved model weights of the actor and critic networks.
To set up the environment and run the code for this project, please follow the steps below:
- Create (and activate) a new environment with Python 3.6.
  - Linux or Mac: `conda create --name drlnd python=3.6`, then `source activate drlnd`
  - Windows: `conda create --name drlnd python=3.6`, then `activate drlnd`
- Download the Unity environment needed for this project (this repository currently uses the Mac OSX version):
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
- For AWS: To train the agent on AWS (without enabling a virtual screen), use this link to obtain the "headless" version of the environment. The agent cannot be watched without enabling a virtual screen, but it can still be trained. (To watch the agent, one can follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)
- Clone the repository and navigate to the `python/` folder, then install the dependencies:
  - `git clone https://github.com/liuwenbindo/drlnd_continuous_control.git`
  - `cd drlnd_continuous_control/python`
  - `pip install .`
- Create an IPython kernel for the `drlnd` environment:
  - `python -m ipykernel install --user --name drlnd --display-name "drlnd"`
- Before running code in a notebook, change the kernel to match the `drlnd` environment by using the drop-down Kernel menu.
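Once the kernel is selected, a minimal check like the sketch below can be run in the first notebook cell to confirm that the key dependencies installed by `pip install .` are importable (the printed versions will depend on your install):

```python
# Sanity check: run in a notebook cell with the "drlnd" kernel selected.
import numpy as np
import torch
from unityagents import UnityEnvironment  # Unity ML-Agents wrapper used by this project

print("NumPy:", np.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```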
In this project, we work with the Reacher environment and solve it using RL models for continuous actions/controls.
In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of the agent is to maintain its position at the target location for as many time steps as possible.
The observation space consists of 33 variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector should be a number between -1 and 1.
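As a minimal sketch, the environment can be loaded and these dimensions inspected as follows; the file name is an assumption (here the macOS build `Reacher.app` placed next to the notebook), so adjust it to the build you downloaded:

```python
import numpy as np
from unityagents import UnityEnvironment

# File name is an assumption: point it at the build for your OS, e.g.
# "Reacher.app" (macOS), "Reacher_Linux/Reacher.x86_64", or "Reacher_Windows_x86_64/Reacher.exe".
env = UnityEnvironment(file_name="Reacher.app")

brain_name = env.brain_names[0]      # default brain controlling the arm(s)
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
num_agents = len(env_info.agents)                     # 1 or 20, depending on the version
state_size = env_info.vector_observations.shape[1]    # 33
action_size = brain.vector_action_space_size          # 4
print(num_agents, state_size, action_size)

# Actions are continuous and must be clipped to [-1, 1].
actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
env_info = env.step(actions)[brain_name]
```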
This project can be done with either of two separate versions of the Unity environment:
- The first version contains a single agent.
- The second version contains 20 identical agents, each with its own copy of the environment (see here for more information).
The second version is useful for algorithms like PPO, A3C, and DDPG that use multiple (non-interacting, parallel) copies of the same agent to distribute the task of gathering experience.
The task is episodic, and in order to solve the environment, the agent must get an average score of +30 over 100 consecutive episodes.
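As a sketch, the solve criterion can be checked with a 100-episode moving average; the loop below reuses `env`, `brain_name`, and `num_agents` from the loading snippet above and assumes a hypothetical `agent` object whose `act`/`step` methods are illustrative rather than this repo's exact interface:

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)   # scores of the last 100 episodes
all_scores = []

for i_episode in range(1, 1001):
    env_info = env.reset(train_mode=True)[brain_name]
    states = env_info.vector_observations
    scores = np.zeros(num_agents)                  # per-agent return for this episode
    while True:
        actions = agent.act(states)                # hypothetical agent interface
        env_info = env.step(actions)[brain_name]
        next_states = env_info.vector_observations
        rewards = env_info.rewards
        dones = env_info.local_done
        agent.step(states, actions, rewards, next_states, dones)
        scores += rewards
        states = next_states
        if np.any(dones):
            break

    episode_score = np.mean(scores)                # averaged over agents (trivial for the single-agent version)
    scores_window.append(episode_score)
    all_scores.append(episode_score)

    if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
        print(f"Environment solved in {i_episode} episodes; "
              f"average score over the last 100 episodes: {np.mean(scores_window):.2f}")
        break
```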
The notebook `Continuous_Control.ipynb` contains the code to set up the agent and run the episode iteration that solves the reinforcement learning problem. Our solution uses a Deep Deterministic Policy Gradient (DDPG) approach (only standard feedforward layers) with experience replay; see this paper.
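For illustration only, the sketch below shows the core DDPG update: the critic is regressed onto a TD target computed with the target networks, the actor is updated to maximize the critic's Q estimate, and the target networks are soft-updated. It uses toy feedforward networks and a random batch in place of the repo's replay buffer, so all names and sizes here are assumptions rather than this project's exact code.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

state_size, action_size, batch = 33, 4, 128
gamma, tau = 0.99, 1e-3

# Toy actor (state -> action in [-1, 1]) and critic (state, action -> Q value).
actor = nn.Sequential(nn.Linear(state_size, 128), nn.ReLU(),
                      nn.Linear(128, action_size), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_size + action_size, 128), nn.ReLU(),
                       nn.Linear(128, 1))
actor_target, critic_target = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# A random "minibatch" standing in for samples drawn from the replay buffer.
states = torch.randn(batch, state_size)
actions = torch.rand(batch, action_size) * 2 - 1
rewards = torch.randn(batch, 1)
next_states = torch.randn(batch, state_size)
dones = torch.zeros(batch, 1)

# Critic update: minimize the TD error against the target networks.
with torch.no_grad():
    next_actions = actor_target(next_states)
    q_targets = rewards + gamma * (1 - dones) * critic_target(
        torch.cat([next_states, next_actions], dim=1))
q_expected = critic(torch.cat([states, actions], dim=1))
critic_loss = F.mse_loss(q_expected, q_targets)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor update: ascend the critic's estimate of Q(s, actor(s)).
actor_loss = -critic(torch.cat([states, actor(states)], dim=1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Soft-update the target networks: theta_target <- tau*theta_local + (1 - tau)*theta_target.
for target, local in ((actor_target, actor), (critic_target, critic)):
    for t_param, l_param in zip(target.parameters(), local.parameters()):
        t_param.data.copy_(tau * l_param.data + (1.0 - tau) * t_param.data)
```

In the actual agent, this update runs on minibatches sampled from the experience replay buffer, with exploration noise added to the actions collected from the environment.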
The agent and the replay memory buffer are implemented in the file `ddpg_agent.py`. The deep learning architectures for both the actor and the critic are defined in `model.py`.
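To watch a trained agent, the saved weights listed above can be restored before running an episode. The sketch below follows the common Udacity DDPG template; the `Agent` constructor, the `actor_local`/`critic_local` attributes, and the `act(states, add_noise=False)` signature are assumptions about this repo's code, so adjust them to match `ddpg_agent.py`.

```python
import numpy as np
import torch
from ddpg_agent import Agent   # assumed class name

# Assumed constructor signature; reuses env and brain_name from the loading snippet above.
agent = Agent(state_size=33, action_size=4, random_seed=0)
agent.actor_local.load_state_dict(torch.load("checkpoint_actor.pth", map_location="cpu"))
agent.critic_local.load_state_dict(torch.load("checkpoint_critic.pth", map_location="cpu"))

# Run one episode without exploration noise.
env_info = env.reset(train_mode=False)[brain_name]
states = env_info.vector_observations
score = 0.0
while True:
    actions = agent.act(states, add_noise=False)   # assumed signature
    env_info = env.step(actions)[brain_name]
    score += np.mean(env_info.rewards)
    states = env_info.vector_observations
    if np.any(env_info.local_done):
        break
print("Episode score:", score)
```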