
Project 2: Continuous Control

Summary

(Edit 08/26/2021)

This repo contains the following items for Project 2 of the Deep Reinforcement Learning Nano-Degree:

  1. Python notebook Continuous_Control.ipynb, containing fully functional code with all code cells executed, output displayed, and all questions answered.
  2. This README.md markdown file with a description of the code.
  3. An HTML or PDF export of the project report, named Report.html or Report.pdf.
  4. Files with the saved model weights of the actor and critic networks: checkpoint_actor.pth and checkpoint_critic.pth.

Dependencies

(Edit 08/26/2021)

To set up the environment and run the code for this project, please follow the steps below:

  1. Create (and activate) a new environment with Python 3.6.

    • Linux or Mac:
      conda create --name drlnd python=3.6
      source activate drlnd
    • Windows:
      conda create --name drlnd python=3.6
      activate drlnd

  2. Download the Unity environment needed for this project (this repository uses the Mac OSX version).

  3. Clone the repository and navigate to the python/ folder. Then install the dependencies:

      git clone https://github.com/liuwenbindo/drlnd_continuous_control.git
      cd drlnd_continuous_control/python
      pip install .

  4. Create an IPython kernel for the drlnd environment:

      python -m ipykernel install --user --name drlnd --display-name "drlnd"

  5. Before running code in a notebook, change the kernel to match the drlnd environment by using the drop-down Kernel menu.
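Once the drlnd kernel is selected, a short cell like the one below can confirm the setup. This is a minimal sketch; it assumes the dependencies installed from the python/ folder include torch and the unityagents package.

    # Run inside a notebook using the "drlnd" kernel to confirm the setup.
    import sys
    import torch                              # deep learning backend for the DDPG agent
    from unityagents import UnityEnvironment  # Python interface to the Unity Reacher build

    print(sys.version)        # expected to report Python 3.6.x
    print(torch.__version__)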

Introduction

In this project, we work with the Reacher environment and solve it using RL models for continuous actions/controls.

[Animation: a trained agent in the Reacher environment]

In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of the agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector should be a number between -1 and 1.
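For reference, these sizes can be inspected directly from the environment. The sketch below follows the standard unityagents workflow and assumes the Mac OSX Reacher build (Reacher.app) sits next to the notebook; the file name will differ on other platforms.

    from unityagents import UnityEnvironment

    # Load the local Reacher build (file name is an assumption for the Mac OSX version).
    env = UnityEnvironment(file_name="Reacher.app")

    # The environment exposes a single "brain" that controls the arm(s).
    brain_name = env.brain_names[0]
    brain = env.brains[brain_name]

    env_info = env.reset(train_mode=True)[brain_name]
    print("Number of agents:", len(env_info.agents))              # 1 or 20, depending on the version
    print("State size:", env_info.vector_observations.shape[1])   # 33
    print("Action size:", brain.vector_action_space_size)         # 4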

Distributed Training

This project can be completed with either of two separate versions of the Unity environment:

  • The first version contains a single agent.
  • The second version contains 20 identical agents, each with its own copy of the environment (see here for more information).

The second version is useful for algorithms like PPO, A3C, and DDPG that use multiple (non-interacting, parallel) copies of the same agent to distribute the task of gathering experience.
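In the 20-agent version, each environment step returns one reward per agent. Continuing from the sketch above (env and brain_name as defined there), the per-agent scores for an episode accumulate roughly as follows; the random actions are only a placeholder for the learned policy.

    import numpy as np

    env_info = env.reset(train_mode=True)[brain_name]
    num_agents = len(env_info.agents)
    scores = np.zeros(num_agents)                 # one running score per agent

    while True:
        # Placeholder policy: random actions clipped to the valid [-1, 1] range.
        actions = np.clip(np.random.randn(num_agents, 4), -1, 1)
        env_info = env.step(actions)[brain_name]
        scores += env_info.rewards                # one reward per agent
        if np.any(env_info.dones):
            break

    print("Mean score across agents:", scores.mean())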

Solving the Environment

The task is episodic, and in order to solve the environment, the agent must get an average score of +30 over 100 consecutive episodes.
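As an illustration of the solve check, a 100-episode moving average can be tracked with a deque; scores below is a hypothetical list holding one score per episode.

    from collections import deque
    import numpy as np

    scores_window = deque(maxlen=100)   # the most recent 100 episode scores

    for i_episode, score in enumerate(scores, start=1):
        scores_window.append(score)
        if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
            print("Environment solved in {} episodes, average score {:.2f}".format(
                i_episode, np.mean(scores_window)))
            break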

Setting up the environment

The environment can be downloaded for all operating systems from one of the links below.

Approach and solution

The notebook Continuous_Control.ipynb contains the code to set up the agent and run the episode iterations that solve the reinforcement learning problem. Our solution uses a Deep Deterministic Policy Gradient (DDPG) approach (standard feedforward layers only) with experience replay; see this paper.
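For orientation, one DDPG learning step on a batch sampled from the replay buffer looks roughly like the sketch below. The function signature, hyperparameter values, and variable names are illustrative assumptions rather than the exact code in this repository.

    import torch
    import torch.nn.functional as F

    def ddpg_learn(experiences, actor_local, actor_target, critic_local, critic_target,
                   actor_opt, critic_opt, gamma=0.99, tau=1e-3):
        """One DDPG update on a replay batch of (s, a, r, s', done) tensors."""
        states, actions, rewards, next_states, dones = experiences

        # Critic: regress Q(s, a) towards the bootstrapped target r + gamma * Q'(s', mu'(s')).
        with torch.no_grad():
            q_next = critic_target(next_states, actor_target(next_states))
            q_targets = rewards + gamma * q_next * (1 - dones)
        critic_loss = F.mse_loss(critic_local(states, actions), q_targets)
        critic_opt.zero_grad()
        critic_loss.backward()
        critic_opt.step()

        # Actor: maximise Q(s, mu(s)) by minimising its negative.
        actor_loss = -critic_local(states, actor_local(states)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()

        # Soft-update the target networks towards the local networks.
        for target, local in ((actor_target, actor_local), (critic_target, critic_local)):
            for t_param, l_param in zip(target.parameters(), local.parameters()):
                t_param.data.copy_(tau * l_param.data + (1.0 - tau) * t_param.data)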

The agent, the replay memory buffer, and the training logic are implemented in the file ddpg_agent.py. The deep learning architectures for both the actor and the critic are defined in model.py.
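The exact layer sizes live in model.py; as a rough illustration of a feedforward actor/critic pair for this environment (layer widths here are assumptions, not the repository's exact architecture):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Actor(nn.Module):
        """Maps a 33-dimensional state to a 4-dimensional action in [-1, 1]."""
        def __init__(self, state_size=33, action_size=4, fc1_units=256, fc2_units=128):
            super().__init__()
            self.fc1 = nn.Linear(state_size, fc1_units)
            self.fc2 = nn.Linear(fc1_units, fc2_units)
            self.fc3 = nn.Linear(fc2_units, action_size)

        def forward(self, state):
            x = F.relu(self.fc1(state))
            x = F.relu(self.fc2(x))
            return torch.tanh(self.fc3(x))   # tanh keeps every action entry in [-1, 1]

    class Critic(nn.Module):
        """Estimates Q(state, action) for a continuous action."""
        def __init__(self, state_size=33, action_size=4, fc1_units=256, fc2_units=128):
            super().__init__()
            self.fc1 = nn.Linear(state_size, fc1_units)
            self.fc2 = nn.Linear(fc1_units + action_size, fc2_units)  # action joins at layer 2
            self.fc3 = nn.Linear(fc2_units, 1)

        def forward(self, state, action):
            x = F.relu(self.fc1(state))
            x = F.relu(self.fc2(torch.cat([x, action], dim=1)))
            return self.fc3(x)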
