
Implement multiple robot agents #25

Open
why0504 opened this issue Nov 8, 2023 · 3 comments
@why0504

why0504 commented Nov 8, 2023

Hi,
I really like your work.
I have an idea for collaborative tasks among multiple robot agents, so I would like to ask: how can I implement multiple robot agents in the environment based on this GitHub repository? Do you have any suggestions?

@Shuijing725
Owner

See #22

@why0504
Author

why0504 commented Nov 20, 2023

Hi,
I've seen https://github.com/Shuijing725/CrowdNav_DSRNN/issues/22
If I want to train two robots, I would add a second robot instance in crowd_sim/envs/crowd_sim.py and crowd_sim/envs/crowd_sim_dict.py, as follows:

```python
rob_RL = Robot(config, 'robot')
rob2_RL = Robot(config, 'robot2')
self.set_robot(rob_RL)
self.set_robot(rob2_RL)
```

and extend the observation dict:

```python
ob['robot_node2'] = self.robot2.get_full_state_list_noV()
```

I assume that both robots use the same reinforcement learning policy, and that the episode ends when both robots reach the same target point. Then, following how the first robot is trained, which parts of the project do I need to modify? Is it enough to change the environment code, e.g. the related functions in crowd_sim_dict.py?
Secondly, does the render() function in crowd_sim.py also need to be modified?
And do I need to add code to train.py?
Sorry to bother you.
Thank you so much!
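A side note on the snippet above: if set_robot() assigns a single self.robot attribute, as is common in single-robot gym environments, the second call would overwrite the first robot. A minimal sketch of one way around this, using a hypothetical MultiRobotEnv and a stub Robot class rather than the repo's actual API, is:

```python
# Hypothetical sketch (not the repo's actual API): store robots in a list
# so a second set_robot() call does not overwrite the first robot.
class Robot:
    def __init__(self, config, name):
        self.config = config
        self.name = name

class MultiRobotEnv:
    def __init__(self):
        self.robots = []  # replaces a single self.robot attribute

    def set_robot(self, robot):
        # append instead of assigning, so every robot is kept
        self.robots.append(robot)

config = {}  # placeholder config
env = MultiRobotEnv()
env.set_robot(Robot(config, 'robot'))
env.set_robot(Robot(config, 'robot2'))
print([r.name for r in env.robots])  # → ['robot', 'robot2']
```

Downstream code that reads self.robot would then need to loop over self.robots instead.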

@Shuijing725
Owner

Shuijing725 commented Nov 20, 2023

Yes, besides the gym environment, multiple modifications are needed. For example, you will probably need to modify the main scripts, including train.py and test.py.
The network and PPO code in the pytorchBaselines folder also needs to be adapted; for example, the DSRNN network and the RL replay buffer are designed only for single-robot scenarios.
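One common way to adapt a single-robot rollout buffer is to add a leading robot dimension to every stored array. This is a sketch with illustrative names (MultiRobotRolloutBuffer is not the repo's actual pytorchBaselines API), not a drop-in replacement:

```python
# Hypothetical sketch: a rollout buffer that stores data for N robots
# by adding a robot dimension to each array. Names are illustrative.
import numpy as np

class MultiRobotRolloutBuffer:
    def __init__(self, num_steps, num_robots, obs_dim, act_dim):
        # shape: (time step, robot index, feature)
        self.obs = np.zeros((num_steps, num_robots, obs_dim))
        self.actions = np.zeros((num_steps, num_robots, act_dim))
        self.rewards = np.zeros((num_steps, num_robots))
        self.step = 0

    def insert(self, obs, actions, rewards):
        # each argument carries one entry per robot at the current step
        self.obs[self.step] = obs
        self.actions[self.step] = actions
        self.rewards[self.step] = rewards
        self.step = (self.step + 1) % self.obs.shape[0]

buf = MultiRobotRolloutBuffer(num_steps=4, num_robots=2, obs_dim=3, act_dim=2)
buf.insert(np.ones((2, 3)), np.zeros((2, 2)), np.array([0.1, 0.2]))
```

The PPO update would then either flatten the robot dimension into the batch (if both robots share one policy, as the questioner assumes) or index it per robot for separate policies.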

Besides changing our repo, another way to achieve your goal is to look at open-source implementations of other multi-agent social navigation papers. For example, you can search for works that use environments like this one.
