POPGym is designed to benchmark memory in deep reinforcement learning. It contains a set of environments and a collection of memory model baselines. The full paper is available on OpenReview.
Please see the documentation for advanced installation instructions and examples. The environment quickstart will get you up and running in a few minutes.
```bash
# Install base environments, only requires numpy and gymnasium
pip install popgym
# Also include navigation environments, which require mazelib
# NOTE: navigation envs require Python <3.12 due to mazelib not supporting 3.12
pip install "popgym[navigation]"
# Install memory baselines w/ RLlib
pip install "popgym[baselines]"
```
```python
import popgym
from popgym.wrappers import PreviousAction, Antialias, Flatten, DiscreteAction

env = popgym.envs.position_only_cartpole.PositionOnlyCartPoleEasy()
print(env.reset(seed=0))

# Append the previous action to the observation, flatten the obs/action spaces,
# then map the multidiscrete action space to a single discrete action for Q-learning
wrapped = DiscreteAction(Flatten(PreviousAction(env)))
print(wrapped.reset(seed=0))
```
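The wrappers change the observation and action spaces, which you can verify directly. A quick sketch, reusing the `env` and `wrapped` objects from above and only standard Gymnasium `Env` attributes:

```python
# Compare the spaces before and after wrapping.
print(env.observation_space)      # raw, partially observable observation space
print(env.action_space)           # original action space
print(wrapped.observation_space)  # flattened observation with the previous action appended
print(wrapped.action_space)       # a single Discrete space, ready for Q-learning
```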
POPGym contains Partially Observable Markov Decision Process (POMDP) environments that follow the Gymnasium interface. POPGym environments have minimal dependencies and are fast enough to solve on a laptop CPU in less than a day. We provide the following environments:
Environment | Tags | Temporal Ordering | Colab FPS | MacBook Air (2020) FPS |
---|---|---|---|---|
Battleship | Game | None | 117,158 | 235,402 |
Concentration | Game | Weak | 47,515 | 157,217 |
Higher Lower | Game, Noisy | None | 24,312 | 76,903 |
Labyrinth Escape | Navigation | Strong | 1,399 | 41,122 |
Labyrinth Explore | Navigation | Strong | 1,374 | 30,611 |
Minesweeper | Game | None | 8,434 | 32,003 |
Multiarmed Bandit | Noisy | None | 48,751 | 469,325 |
Autoencode | Diagnostic | Strong | 121,756 | 251,997 |
Count Recall | Diagnostic, Noisy | None | 16,799 | 50,311 |
Repeat First | Diagnostic | None | 23,895 | 155,201 |
Repeat Previous | Diagnostic | Strong | 50,349 | 136,392 |
Position Only Cartpole | Control | Strong | 73,622 | 218,446 |
Velocity Only Cartpole | Control | Strong | 69,476 | 214,352 |
Noisy Position Only Cartpole | Control, Noisy | Strong | 6,269 | 66,891 |
Position Only Pendulum | Control | Strong | 8,168 | 26,358 |
Noisy Position Only Pendulum | Control, Noisy | Strong | 6,808 | 20,090 |
Feel free to rerun this benchmark using this Colab notebook.
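For context, the FPS columns measure random-policy stepping. A minimal sketch of such a throughput measurement (the notebook's exact code may differ) looks like this:

```python
import time

from popgym.envs.position_only_cartpole import PositionOnlyCartPoleEasy

# Rough throughput measurement with a random policy; illustrative only.
env = PositionOnlyCartPoleEasy()
obs, info = env.reset(seed=0)
steps, start = 0, time.perf_counter()
while time.perf_counter() - start < 5.0:  # sample for ~5 seconds
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    steps += 1
    if terminated or truncated:
        obs, info = env.reset()
elapsed = time.perf_counter() - start
print(f"{steps / elapsed:,.0f} FPS")
```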
The POPGym baselines implement recurrent and memory models in an efficient manner. They are built on top of RLlib, using its custom model API (see the usage sketch after the list). We provide the following baselines:
- MLP
- Positional MLP
- Framestacking (Paper)
- Temporal Convolution Networks (Paper)
- Elman Networks (Paper)
- Long Short-Term Memory (Paper)
- Gated Recurrent Units (Paper)
- Independently Recurrent Neural Networks (Paper)
- Fast Autoregressive Transformers (Paper)
- Fast Weight Programmers (Paper)
- Legendre Memory Units (Paper)
- Diagonal State Space Models (Paper)
- Differentiable Neural Computers (Paper)
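Training with one of these models follows RLlib's usual custom-model pattern. A minimal sketch, assuming the GRU baseline lives at `popgym.baselines.ray_models.ray_gru.GRU` and that POPGym registers environment IDs like `popgym-ConcentrationEasy-v0` (both are assumptions; check the repo for the exact names and config keys):

```python
from ray import tune

# Assumed import path; the baselines layout may differ across versions.
from popgym.baselines.ray_models.ray_gru import GRU

config = {
    "env": "popgym-ConcentrationEasy-v0",  # assumed registered env ID
    "framework": "torch",
    "model": {
        "max_seq_len": 1024,  # truncated BPTT length
        "custom_model": GRU,  # RLlib accepts a ModelV2 subclass directly
        "custom_model_config": {"hidden_size": 256},  # assumed model kwargs
    },
}
tune.run("PPO", config=config)
```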
The leaderboard is available on Papers with Code.
Follow the style guide and ensure the tests pass:

```bash
pip install pre-commit
pre-commit install
pytest popgym/tests
```
Please cite POPGym as follows:

```bibtex
@inproceedings{
morad2023popgym,
title={{POPG}ym: Benchmarking Partially Observable Reinforcement Learning},
author={Steven Morad and Ryan Kortvelesy and Matteo Bettini and Stephan Liwicki and Amanda Prorok},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=chDrutUTs0K}
}
```