The code is written in Python 3 and builds on TensorFlow. Many of the provided reinforcement learning environments require the MuJoCo physics engine. Overall, the code was developed with modularity and computational efficiency in mind. Many components of the Meta-RL algorithm are parallelized, using either MPI or TensorFlow, to ensure efficient use of all CPU cores.
The provided code can be run either A) in the Docker container provided by us or B) using Python on your local machine. The latter requires multiple installation steps to set up the dependencies.
Ensure that you have a working MPI implementation (see here for more instructions).
On Ubuntu, you can install MPI through the package manager:
sudo apt-get install libopenmpi-dev
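Once the Python dependencies (see below) are installed, a quick sanity check like the following can confirm that MPI is reachable from Python. This is only a sketch and assumes the mpi4py package is part of the requirements:

```python
# check_mpi.py -- minimal MPI sanity check
# (assumes mpi4py, installed e.g. as part of the requirements below)
from mpi4py import MPI

comm = MPI.COMM_WORLD
# every process reports its rank; running e.g.
#   mpirun -np 2 python check_mpi.py
# should print one line per process
print("Hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))
```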
If you have not done so yet, install Anaconda by following the instructions here.
Then create an Anaconda environment, activate it, and install the dependencies listed in requirements.txt:
conda env create -f environment.yml
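After activating the environment (its name is defined in environment.yml), a short import check can verify that the core dependencies resolved correctly; this is just a sketch:

```python
# quick sanity check that the core dependencies are importable
import tensorflow as tf
import numpy as np

print("TensorFlow:", tf.__version__)
print("NumPy:", np.__version__)
```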
Running the majority of the provided Meta-RL environments requires the MuJoCo physics engine as well as the corresponding Python wrapper, mujoco-py. To set up MuJoCo and mujoco-py, please follow the instructions here.
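Once MuJoCo and mujoco-py are set up, a check along the following lines can confirm that the physics engine loads from Python. The environment id below is only an example and assumes a gym installation with MuJoCo-based tasks:

```python
# verify that mujoco-py and a MuJoCo-based environment load correctly
import mujoco_py  # raises an informative error if MuJoCo is not set up
import gym

env = gym.make("HalfCheetah-v2")  # example id; any MuJoCo task works here
obs = env.reset()
print("observation shape:", obs.shape)
```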