Anurag Koul edited this page Jul 4, 2019 · 25 revisions

Basics

Let's begin by importing the basic packages:

>>> import gym
>>> import ma_gym 

We have registered all the new multi-agent environments, so they can be created with gym.make:

>>> env = gym.make('CrossOver-v0')

How many agents does this environment have?

>>> env.n_agents
2

env.action_space is a list containing each agent's action space:

>>> env.action_space
[Discrete(5), Discrete(5)]

The following samples an action for each agent (much like OpenAI Gym):

>>> env.action_space.sample()
[0, 2]

Resetting the environment returns each agent's initial observation:

>>> env.reset()
[[1.0, 0.375, 0.0], [1.0, 0.75, 0.0]]

Let's step the environment with a random action for each agent:

>>> obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
>>> obs_n
[[1.0, 0.375, 0.01], [1.0, 0.75, 0.01]]
>>> reward_n
[0, 0]

An episode is considered done when every agent is done (i.e., all entries of done_n are True).

>>> episode_terminate = all(done_n)

Also, the team reward is simply the sum of all local rewards:

>>> team_reward = sum(reward_n)
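Putting the steps above together, a full episode loop follows this pattern: step until all agents are done, accumulating the team reward. The sketch below uses a hypothetical StubEnv stand-in (with an assumed sample_action helper) so it runs without ma_gym installed; with ma_gym, `env = gym.make('CrossOver-v0')` plugs into run_episode the same way, using env.action_space.sample() instead.

```python
import random

class StubEnv:
    """Hypothetical stand-in for a 2-agent ma_gym environment."""
    n_agents = 2

    def __init__(self):
        self._t = 0

    def reset(self):
        self._t = 0
        # One observation (list of floats) per agent
        return [[0.0, 0.0, 0.0] for _ in range(self.n_agents)]

    def step(self, action_n):
        self._t += 1
        obs_n = [[0.0, 0.0, self._t / 100.0] for _ in range(self.n_agents)]
        reward_n = [0.0 for _ in range(self.n_agents)]
        done_n = [self._t >= 10 for _ in range(self.n_agents)]
        return obs_n, reward_n, done_n, {}

    def sample_action(self):
        # One Discrete(5) action per agent
        return [random.randrange(5) for _ in range(self.n_agents)]

def run_episode(env):
    obs_n = env.reset()
    done_n = [False] * env.n_agents
    total_team_reward = 0.0
    while not all(done_n):                  # episode ends when every agent is done
        obs_n, reward_n, done_n, info = env.step(env.sample_action())
        total_team_reward += sum(reward_n)  # team reward = sum of local rewards
    return total_team_reward
```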

Customizing an environment

You can register your own variant of an environment with different parameters. (The entry_point below is gym's MountainCarEnv, kept here as a placeholder; to customize CrossOver, point entry_point at the corresponding class registered in ma_gym/__init__.py.)

gym.envs.register(
    id='MyCrossOver-v0',
    entry_point='gym.envs.classic_control:MountainCarEnv',
    max_episode_steps=250,      # MountainCar-v0 uses 200
    reward_threshold=-110.0,
)
env = gym.make('MyCrossOver-v0')
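To illustrate what gym.envs.register and gym.make do conceptually, here is a minimal, self-contained sketch of an id-to-constructor registry. Registry and DummyEnv are illustrative names, not gym internals; the real registry additionally handles version parsing, time-limit wrappers, and duplicate-id checks.

```python
class Registry:
    """Toy registry mapping an environment id to its constructor and kwargs."""

    def __init__(self):
        self._specs = {}

    def register(self, id, entry_point, **kwargs):
        # entry_point is a callable here; real gym also accepts
        # a 'module:ClassName' string that it imports lazily.
        if id in self._specs:
            raise ValueError(f"{id} is already registered")
        self._specs[id] = (entry_point, kwargs)

    def make(self, id):
        entry_point, kwargs = self._specs[id]
        return entry_point(**kwargs)

class DummyEnv:
    """Illustrative environment that just stores its configuration."""

    def __init__(self, max_episode_steps=200):
        self.max_episode_steps = max_episode_steps

registry = Registry()
registry.register('MyDummy-v0', DummyEnv, max_episode_steps=250)
env = registry.make('MyDummy-v0')
```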

For more usage details, refer to: https://github.com/koulanurag/ma-gym/blob/master/ma_gym/__init__.py

Monitoring

Please note that the following Monitor wrapper is imported from the ma_gym package (not from gym):

import gym
from ma_gym.wrappers import Monitor

env = gym.make('CrossOver-v0')
env = Monitor(env, directory='recordings', force=True)

This saves video files of each episode to the recordings folder.
