Hasta la vista, baby
Matheus Vieira Portela edited this page Nov 29, 2016
Your use case may involve higher stakes, such as controlling real robots instead of simulated ones. In that case, using ROS is a good idea, since many drivers, communication protocols, and middleware components are already developed and ready to use.

Referring back to A brave new world, ROS fits naturally into the example simulation loop: instead of simulating an environment, we communicate with the real robots. We only need to obtain or estimate the state somehow and send the selected actions to the robots through ROS.
```python
from multiagentrl import core
from multiagentrl import messages


class ExampleExperiment(core.BaseExperiment):
    def __init__(self, learn_games, test_games):
        super(ExampleExperiment, self).__init__(
            learn_games=learn_games,
            test_games=test_games)
        # Bridge between the learning agents and the real robots
        self.ros = ROSCommunicationChannel()

    def execute_game(self):
        # Send the first state before learning starts
        for agent in self.agents:
            agent.send_state()

        while True:
            # Receive an action for the current state
            actions = [agent.receive_action() for agent in self.agents]

            # Execute the actions on the real robots through ROS
            self.ros.send_actions(actions)
            self.ros.receive_states()

            # Update the state so the agents can learn from the received reward
            for agent in self.agents:
                agent.send_state()

            # Get the reward for executing the action and reaching the new state
            for agent in self.agents:
                agent.send_reward()
```
Now it would be possible to watch the agents act in real time, in the real world!
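`ROSCommunicationChannel` is not part of multiagentrl; it stands for whatever bridge you write between the experiment loop and the robots. As a minimal sketch of the interface the loop above assumes, here is a hypothetical version that stubs the ROS topics with in-memory queues — with rospy, `send_actions` would publish on an actions topic and `receive_states` would block on a states subscriber:

```python
import queue


class ROSCommunicationChannel(object):
    """Hypothetical bridge between the experiment loop and real robots.

    The queues below are stand-ins for ROS topics; a real implementation
    would publish actions and subscribe to state messages via rospy.
    """

    def __init__(self):
        # Stand-ins for an action publisher and a state subscriber
        self.action_queue = queue.Queue()
        self.state_queue = queue.Queue()

    def send_actions(self, actions):
        # With rospy: publish one action message per robot
        self.action_queue.put(actions)

    def receive_states(self):
        # With rospy: block until the next state message arrives
        return self.state_queue.get()
```

Swapping the queues for `rospy.Publisher` and `rospy.Subscriber` objects keeps the experiment loop unchanged, which is the point of isolating all ROS traffic behind this one class.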