
Adding support for the Franka Panda robot #77

Closed
gal-leibovich opened this issue Apr 19, 2020 · 5 comments
Labels
help wanted Extra attention is needed

Comments

@gal-leibovich

Hi,

Thanks a lot for this great open-source benchmark. Great work!

I was wondering are there any (hopefully short-term) plans to add support for the Franka Panda robot as part of the benchmark? This robot is becoming pretty common and standard in many libraries.

Thanks,
Gal

@ryanjulian ryanjulian added the v2 ideas for a new major version of the benchmark label Apr 20, 2020
@ryanjulian
Contributor

Hi @gal-leibovich --

tl;dr: We don't have any plans for changing the robot model or adding a new one, but thanks for raising the idea.

Standardization is core to the purpose of the benchmark, so introducing a second robot model (simultaneously with Sawyer) would really compromise that mission.

MuJoCo's physics simulation isn't very reflective of reality, and so our choice of Sawyer robot model essentially just provides a set of joint kinematics, a very generic parallel gripper geometry, and some pretty skins to show in the 3D visualization. Note that MuJoCo actually models all the joints as cylinders, so the Sawyer-specific polygons are really just for show.
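
To make that concrete, here is a minimal sketch (illustrative only, not Metaworld code), assuming the official `mujoco` Python bindings and a made-up single-link XML: the physics only sees joint primitives and whatever collision geoms are declared, so robot-specific meshes can be attached purely for visualization.

```python
# Minimal illustrative sketch, assuming the official `mujoco` Python bindings.
# Dynamics come from the joint primitive and the simple collision geom;
# a detailed robot mesh would typically be visual-only.
import mujoco

ARM_XML = """
<mujoco>
  <worldbody>
    <body name="link0" pos="0 0 0.1">
      <joint name="shoulder" type="hinge" axis="0 0 1"/>
      <!-- collision and inertia come from this capsule; a detailed mesh would
           normally be added with contype="0" conaffinity="0", i.e. for show -->
      <geom type="capsule" fromto="0 0 0 0 0 0.3" size="0.04"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(ARM_XML)
print(model.njnt, model.ngeom)  # -> 1 1: one hinge joint and one primitive geom
```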

Given how similar the kinematics of Sawyer and Franka Panda are, I don't expect any conclusions about real-world performance you might draw from experiments with Metaworld to be significantly different when applied to the Franka Panda.

I think changing the robot model is an issue to consider for future major versions of the benchmark, mainly for aesthetic reasons: though popular, Sawyer is quickly falling out of favor for new installations, due to Rethink's closure and sale to a German industrial automation conglomerate. It's important that the benchmark look and feel like a real robotics lab.

I'm going to keep this issue open so we can consider it as we plan for the future of Metaworld. If you'd be interested in donating effort toward using Franka models in a new version of Metaworld, let us know! It would definitely influence our decision.

@gal-leibovich
Author

Hi @ryanjulian,

Thanks a lot for the detailed answer, and sorry it took me so long to write back.

The points about standardization and the effort required to support a second robot model totally make sense.

Have you ever had a chance to try sim-to-real transfer from one simulated robot (e.g. Sawyer) to a different real-world robot model?
Given how similarly these robots behave in simulation, it makes sense that the transfer shouldn't be affected much, but on the other hand, I would expect the differing characteristics of the models (e.g. friction, damping, mass, control range and others) to have some effect on the transfer to the real world. What do you think?

@ryanjulian
Contributor

@gal-leibovich certainly Metaworld would not be a suitable sim environment for sim2real transfer to a real Franka Panda -- I apologize, I didn't mean to imply otherwise.

I merely meant that if you trained an algorithm on a Metaworld environment and got some level of performance, you would expect that algorithm to get about the same performance on a (simulated) Franka environment with the same task, such as opening a door.

For sim2real training you will want a simulated model of your actual robot.

In my own research, I've found that MuJoCo is pretty bad for simulating real-world dynamics (e.g. if you are using joint effort or velocity commands), but pretty good if you use joint-position space control. I think that dynamics randomization could likely overcome the first issue, but have not tried.
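
For reference, dynamics randomization here just means resampling the simulated model's physical parameters every episode so a policy can't overfit to one set of dynamics. Below is a minimal sketch, assuming the official `mujoco` Python bindings; `body_mass`, `dof_damping`, and `geom_friction` are standard MjModel fields, but the helper functions and scaling ranges are illustrative only, not anything Metaworld provides.

```python
# Minimal dynamics-randomization sketch (illustrative only, not Metaworld code),
# assuming the official `mujoco` Python bindings.
import numpy as np
import mujoco


def capture_nominal(model: mujoco.MjModel) -> dict:
    """Copy the parameters we intend to randomize around."""
    return {
        "body_mass": model.body_mass.copy(),
        "dof_damping": model.dof_damping.copy(),
        "geom_friction": model.geom_friction.copy(),
    }


def randomize_dynamics(model: mujoco.MjModel, nominal: dict,
                       rng: np.random.Generator) -> None:
    """Rescale masses, joint damping, and sliding friction around nominal values."""
    model.body_mass[:] = nominal["body_mass"] * rng.uniform(0.8, 1.2, model.nbody)
    model.dof_damping[:] = nominal["dof_damping"] * rng.uniform(0.5, 1.5, model.nv)
    model.geom_friction[:, 0] = (
        nominal["geom_friction"][:, 0] * rng.uniform(0.7, 1.3, model.ngeom)
    )


# Usage: call capture_nominal(model) once, then randomize_dynamics(...) at the
# start of every training episode, before resetting the simulation state.
```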

Keep in mind that this is a benchmark intended for comparing the performance of algorithms, not a general-purpose environment library. There are a number of unique software challenges that come with trying to set up a parallel simulated/real robot environment, which we don't seek to address at all. For better examples of how to do that, I recommend you take a look at rlworkgroup/gym-sawyer or some of the resources linked here: ARISE-Initiative/robosuite#43

You are welcome to fork/copy/customize Metaworld as long as you follow the LICENSE. The most useful parts for a Franka port are probably the object models and reward functions.

@gal-leibovich
Author

Yep, that makes more sense. Thanks again. And thanks for the references, I will definitely check those out.

@ryanjulian ryanjulian added help wanted Extra attention is needed and removed v2 ideas for a new major version of the benchmark labels Jul 14, 2020
@ryanjulian
Contributor

@gal-leibovich We're unlikely to add this to this incarnation of the benchmark, so I'm going to close this for now.
