RLBench is an ambitious large-scale benchmark and learning environment for vision-guided manipulation research, covering reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and, in particular, few-shot learning. Click here for the website and paper.
Contents:
- Announcements
- Install
- Running Headless
- Getting Started
- Tasks
- Task Building
- Gotchas!
- Contributing
- Acknowledgements
- Citation
- Shaped rewards added for: reach_target and take_lid_off_saucepan. Pass shaped_rewards=True to the Environment class (see the sketch after this list).
- Version 1.2.0 is live! Note: This release will cause code-breaking API changes for action modes.
- New instructions on headless GPU rendering here!
- New tutorial series on task creation here!
- We added a Discord channel to allow the RLBench community to help one another. Click the Discord badge above.
- RLBench has been accepted to RA-L with presentation at ICRA!
- Ability to easily swap out arms added. See here.
- Gym is now supported!
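A minimal sketch of enabling those shaped rewards; it assumes only what the announcement above states (that shaped_rewards=True is passed to the Environment class) plus the standard action-mode setup used throughout this README:
import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import ReachTarget

action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
# shaped_rewards currently applies only to the tasks listed in the announcement above.
env = Environment(action_mode, shaped_rewards=True)
env.launch()
task = env.get_task(ReachTarget)
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))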
RLBench is built around CoppeliaSim v4.1.0 and PyRep.
First, install CoppeliaSim:
# set env variables
export COPPELIASIM_ROOT=${HOME}/CoppeliaSim
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT
wget https://downloads.coppeliarobotics.com/V4_1_0/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
mkdir -p $COPPELIASIM_ROOT && tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz -C $COPPELIASIM_ROOT --strip-components 1
rm -rf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
To install the RLBench Python package:
pip install git+https://github.com/stepjam/RLBench.git
And that's it!
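To sanity-check the install, here is a quick sketch (not part of the official examples) that simply launches and shuts down the simulator. It assumes a display is available; on headless machines, see the next section first.
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment

env = Environment(MoveArmThenGripper(JointVelocity(), Discrete()))
env.launch()     # starts CoppeliaSim via PyRep
print('RLBench launched successfully')
env.shutdown()   # closes the simulator again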
If you are running on a machine without a display (e.g. cloud VMs, compute clusters), you can refer to the following guide to run RLBench headlessly with rendering.
First, configure your X config. This only needs to be done once.
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
echo -e 'Section "ServerFlags"\n\tOption "MaxClients" "2048"\nEndSection\n' \
| sudo tee /etc/X11/xorg.conf.d/99-maxclients.conf
Leave out --use-display-device=None
if the GPU is headless, i.e. if it has no display outputs.
Then, whenever you want to run RLBench, spin up X.
# nohup and disown are important for the X server to keep running in the background
sudo nohup X :99 & disown
Test if your display works using glxgears.
DISPLAY=:99 glxgears
If you have multiple GPUs, you can select your GPU by doing the following.
DISPLAY=:99.<gpu_id> glxgears
To spin up X as a non-sudo user, edit the file '/etc/X11/Xwrapper.config' and replace the line:
allowed_users=console
with lines:
allowed_users=anybody
needs_root_rights=yes
If the file does not exist already, you can create it.
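With X running on :99, RLBench just needs to render to that display. Prefixing your command is enough (e.g. DISPLAY=:99 python examples/single_task_rl.py). Alternatively, the sketch below sets DISPLAY from inside Python before the simulator launches; treat this as an assumption rather than an officially documented route.
import os

# Must be set before the simulator is launched, since the renderer
# reads DISPLAY at start-up. ':99' matches the X server started above;
# use ':99.<gpu_id>' to pick a GPU on multi-GPU machines.
os.environ['DISPLAY'] = ':99'

from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment

env = Environment(MoveArmThenGripper(JointVelocity(), Discrete()))
env.launch()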
The benchmark places particular emphasis on few-shot learning and meta learning due to the breadth of tasks available, though it can be used in numerous ways. Before using RLBench, check out the Gotchas section.
We have created splits of tasks called 'Task Sets', which consist of a collection of X training tasks and 5 test tasks, where X can be 10, 25, 50, or 95.
For example, to work on the task set with 10 training tasks, we import FS10_V1:
import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import FS10_V1
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()
train_tasks = FS10_V1['train']
test_tasks = FS10_V1['test']
task_to_train = np.random.choice(train_tasks, 1)[0]
task = env.get_task(task_to_train)
task.sample_variation() # random variation
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))
A full example can be seen in examples/few_shot_rl.py. The same pattern applies to standard single-task reinforcement learning:
import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import ReachTarget
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()
task = env.get_task(ReachTarget)
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))
A full example can be seen in examples/single_task_rl.py. If you would like to bootstrap from demonstrations, then take a look at examples/single_task_rl_with_demos.py. Visual domain randomization is enabled by passing a VisualRandomizationConfig and a randomization schedule to the Environment:
import numpy as np
from rlbench import Environment
from rlbench import RandomizeEvery
from rlbench import VisualRandomizationConfig
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.tasks import OpenDoor
# We will borrow some textures from the tests dir
rand_config = VisualRandomizationConfig(
    image_directory='../tests/unit/assets/textures')
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(
    action_mode, randomize_every=RandomizeEvery.EPISODE,
    frequency=1, visual_randomization_config=rand_config)
env.launch()
task = env.get_task(OpenDoor)
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))
A full example can be seen in examples/single_task_rl_domain_randomization.py. For imitation learning, demonstrations can be loaded and used, for example, to compute a behaviour-cloning loss:
import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import ReachTarget
# To use 'saved' demos, set the path below
DATASET = 'PATH/TO/YOUR/DATASET'
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(action_mode, DATASET)
env.launch()
task = env.get_task(ReachTarget)
demos = task.get_demos(2)  # -> List[List[Observation]]
demos = np.array(demos).flatten()
batch_size = 32  # illustrative; must not exceed the number of stored observations
batch = np.random.choice(demos, size=batch_size, replace=False)
batch_images = [obs.left_shoulder_rgb for obs in batch]
predicted_actions = predict_action(batch_images)  # predict_action is user-defined
ground_truth_actions = [obs.joint_velocities for obs in batch]
loss = behaviour_cloning_loss(ground_truth_actions, predicted_actions)  # user-defined loss
A full example can be seen in examples/imitation_learning.py.
We have created splits of tasks called 'Task Sets', which consist of a
collection of X training tasks. Here X can be 15, 30, 55, or 100.
For example, to work on the task set with 15 training tasks, we import MT15_V1:
import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import MT15_V1
action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()
train_tasks = MT15_V1['train']
task_to_train = np.random.choice(train_tasks, 1)[0]
task = env.get_task(task_to_train)
task.sample_variation() # random variation
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))
A full example can be seen in examples/multi_task_learning.py.
RLBench is Gym compatible! Ensure you have gym installed (pip3 install gym).
Simply select your task of interest from rlbench/tasks/, and then load the task by using the task name (e.g. 'reach_target') followed by the observation mode: 'state' or 'vision'.
import gym
import rlbench
env = gym.make('reach_target-state-v0')
# Alternatively, for vision:
# env = gym.make('reach_target-vision-v0')
training_steps = 120
episode_length = 40
for i in range(training_steps):
    if i % episode_length == 0:
        print('Reset Episode')
        obs = env.reset()
    obs, reward, terminate, _ = env.step(env.action_space.sample())
    env.render()  # Note: rendering increases step time.
print('Done')
env.close()
A full example can be seen in examples/rlbench_gym.py.
The default Franka Panda Arm can be swapped out for another. This can be useful for those who have custom tasks or want to perform sim-to-real experiments on the tasks. However, if you swap out the arm, then we can't guarantee that the task will be solvable. For example, the Mico arm has a very small workspace in comparison to the Franka.
For benchmarking, the arm should remain as the Franka Panda.
Currently supported arms:
- Franka Panda arm with Franka gripper (franka)
- Mico arm with Mico gripper (mico)
- Jaco arm with 3-finger Jaco gripper (jaco)
- Sawyer arm with Baxter gripper (sawyer)
- UR5 arm with Robotiq 85 gripper (ur5)
You can then swap out the arm using robot_setup:
env = Environment(action_mode=action_mode, robot_setup='sawyer')
A full example (using the Sawyer) can be seen in examples/swap_arm.py.
Don't see the arm that you want to use? Your first step is to make sure it is in PyRep; if not, you can follow the instructions for importing a new arm on the PyRep GitHub page. After that, feel free to open an issue and we can bring it into RLBench for you.
To see a full list of all tasks, see here.
To see gifs of each of the tasks, see here.
The task building tool is the interface for users who wish to create new tasks to be added to the RLBench task repository. Each task has two associated files: a V-REP model file (.ttm), which holds all of the scene information and demo waypoints, and a Python (.py) file, which is responsible for wiring the scene objects to the RLBench backend, applying variations, defining success criteria, and adding other more complex task behaviours.
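As a rough illustration only (the tutorials below are the authoritative reference), the Python side of a hypothetical task might look like the sketch below. The class name, object names, and description string are made up; the base class and condition helpers are the ones used by existing tasks such as reach_target.
from typing import List

from pyrep.objects.proximity_sensor import ProximitySensor
from pyrep.objects.shape import Shape

from rlbench.backend.conditions import DetectedCondition
from rlbench.backend.task import Task


class PutBallInHoop(Task):  # hypothetical task

    def init_task(self) -> None:
        # Wire up objects that are defined in the associated .ttm scene file.
        ball = Shape('ball')
        success_sensor = ProximitySensor('success')
        self.register_graspable_objects([ball])
        self.register_success_conditions(
            [DetectedCondition(ball, success_sensor)])

    def init_episode(self, index: int) -> List[str]:
        # Per-variation setup goes here; return the natural language descriptions.
        return ['put the ball in the hoop']

    def variation_count(self) -> int:
        return 1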
Video tutorial series here!
In-depth text tutorials:
- Using low-dimensional task observations (rather than images): RLBench was designed to be challenging, putting the emphasis on vision rather than toy-like low-dimensional inputs. Although each task does supply a low-dimensional observation, this should be used with extreme caution!
- Why? Imagine you are training a reinforcement learning agent to pick up a block; halfway through training, the block slips from the gripper and falls off the table. These low-dimensional values will now be out of distribution, i.e. RLBench does not safeguard against objects leaving the workspace. This issue does not arise when using image-based observations.
- Using non-standard image sizes: RLBench uses 128x128 image observations by default. When using an alternative size, be aware that you may need to collect your saved demonstrations again.
- Why? If we instead specify a 64x64 image observation size to the ObservationConfig, then the scene cameras will render at that size, but any demos already saved on disk will be resized down to 64x64. This resizing can introduce small artifacts in the stored demos that would not be present in the 'live' observations from the scene. Instead, prefer to re-collect demos at the image observation size you plan to use in the 'live' environment (see the sketch after this list).
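If you do change the image size, the sketch below shows how this is typically configured. The ObservationConfig and camera attribute names are taken from the current API, but double-check them against your installed version.
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.observation_config import ObservationConfig

obs_config = ObservationConfig()
obs_config.set_all(True)  # enable all camera and low-dimensional observations
# Render the shoulder, wrist and front cameras at 64x64 instead of the
# default 128x128. Remember to re-collect any saved demos at this size too.
for camera in (obs_config.left_shoulder_camera,
               obs_config.right_shoulder_camera,
               obs_config.wrist_camera,
               obs_config.front_camera):
    camera.image_size = (64, 64)

action_mode = MoveArmThenGripper(
    arm_action_mode=JointVelocity(),
    gripper_action_mode=Discrete()
)
env = Environment(action_mode, obs_config=obs_config)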
New tasks using our task building tool, in addition to bug fixes, are very welcome! When building your task, please ensure that you run the task validator in the task building tool.
A full contribution guide is coming soon!
Models were supplied from turbosquid.com, cgtrader.com, free3d.com, thingiverse.com, and cadnav.com.
@article{james2019rlbench,
  title={RLBench: The Robot Learning Benchmark \& Learning Environment},
  author={James, Stephen and Ma, Zicong and Rovick Arrojo, David and Davison, Andrew J.},
  journal={IEEE Robotics and Automation Letters},
  year={2020}
}