
Getting the top_site and bottom_site parameters #21

Closed
gsp-27 opened this issue Feb 20, 2019 · 6 comments


@gsp-27

gsp-27 commented Feb 20, 2019

Hi,

I am trying to load different objects into your amazing environment to train a general-purpose grasper. For that, I have to provide a different XML for each object. These XMLs contain "site" parameters that specify the topmost and bottommost points of the object relative to its center, and judging from the values, they are only computed in the z-direction. But when I use the models provided in this repository's meshes and try to compute the top site and bottom site myself, I do not get the same answers as those written in the XML files. My procedure is: compute the mean of the STL vertices (this is my center), subtract this center so that every vertex is expressed relative to it, and then take the min and max in the z-direction to get the bottom site and top site.
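
For reference, here is a minimal sketch of that procedure (assuming the third-party `trimesh` library for loading the STL; `estimate_sites` is a hypothetical helper, not part of this repository):

```python
import numpy as np
import trimesh  # assumed STL loader; any mesh library with vertex access works

def estimate_sites(stl_path):
    # Take the vertex mean as the object "center", then re-center the vertices.
    mesh = trimesh.load(stl_path)
    center = mesh.vertices.mean(axis=0)
    verts = mesh.vertices - center

    # The top/bottom sites are the extreme z-offsets relative to the center.
    top_site = np.array([0.0, 0.0, verts[:, 2].max()])
    bottom_site = np.array([0.0, 0.0, verts[:, 2].min()])
    return top_site, bottom_site
```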

Maybe I am doing something wrong; any help on this topic would be great.

Thanks

@jirenz
Contributor

jirenz commented Feb 20, 2019

The 3-D coordinate of the site matters, and the site is expressed in body coordinates. When we put the object on the table (say at `table_top = np.array([0.5, 0.5, 0])`), we do `obj.pos = table_top - obj.bottom_offset`. For a mesh object, where the bottom should go depends on your mesh.
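
To illustrate that placement logic (the values are made up; `bottom_offset` stands in for the object's bottom-site offset expressed in body coordinates):

```python
import numpy as np

table_top = np.array([0.5, 0.5, 0.0])        # point on the table surface
bottom_offset = np.array([0.0, 0.0, -0.02])  # bottom site 2 cm below the body origin

# Shift the body so its bottom site lands exactly on the table surface.
obj_pos = table_top - bottom_offset          # -> array([0.5 , 0.5 , 0.02])
```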

@anchit should be able to tell you how the sites for STL files are generated.

@gsp-27
Author

gsp-27 commented Feb 20, 2019

Yes. But I am not sure how these values are computed, because when I compute them using the topmost and bottommost points of the mesh, I do not get the same values as those written in the XML file.

@anchit

anchit commented Feb 20, 2019

The top and bottom sites for the mesh objects have not been computed in a principled way; we just approximately tuned them to what works well. Currently they are only used to initialize the object at the right place, so being accurate isn't necessary.

@gsp-27
Author

gsp-27 commented Feb 21, 2019

I see, thanks for the info.

But is the method I described above the intended way to compute these parameters? I am trying to load random ShapeNet objects into the scene, so it would be beneficial to have them computed automatically.

@jirenz
Contributor

jirenz commented Feb 21, 2019

I think it would work. The objects might tilt a bit because the x and y coordinates are not necessarily correct, but what you described is what I would do for ShapeNet as well.

@gsp-27
Author

gsp-27 commented Feb 21, 2019

Okay, thanks a lot.

@gsp-27 gsp-27 closed this as completed Feb 21, 2019
anchit pushed a commit that referenced this issue Apr 15, 2019
Baxter robot & some sample environments
cremebrule pushed a commit that referenced this issue Nov 14, 2022
yukezhu mentioned this issue Dec 1, 2022
yukezhu added a commit that referenced this issue Dec 1, 2022
# robosuite 1.4.0 Release Notes
- Highlights
- New Features
- Improvements
- Critical Bug Fixes
- Other Bug Fixes

# Highlights
This release of robosuite refactors our backend to leverage DeepMind's new [mujoco](https://github.com/deepmind/mujoco) bindings. Below, we discuss the key details of this refactoring:

## Installation
Installation is now much simpler: on Linux and macOS, mujoco is installed directly via `pip install mujoco`. Importing is now done via `import mujoco` instead of `import mujoco_py`.
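
For example, a quick sanity check that the new bindings are installed (a minimal sketch using the public `mujoco` API):

```python
# pip install mujoco
import mujoco  # replaces the old `import mujoco_py`

# Build a trivial model and step it once to confirm the bindings load.
model = mujoco.MjModel.from_xml_string("<mujoco><worldbody/></mujoco>")
data = mujoco.MjData(model)
mujoco.mj_step(model, data)
print(mujoco.__version__)
```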

## Rendering
The new DeepMind mujoco bindings do not ship with an onscreen renderer. As a result, we've implemented an [OpenCV renderer](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/utils/opencv_renderer.py), which provides most of the core functionality of the original mujoco renderer but has a few limitations (most significantly, no glfw keyboard callbacks and no ability to move the free camera).
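
On-screen rendering still follows the familiar loop; here is a sketch along the lines of the robosuite quickstart (exact constructor arguments may vary by version):

```python
import numpy as np
import robosuite as suite

# Create an environment with the on-screen (OpenCV) renderer enabled.
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=True,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)

env.reset()
low, high = env.action_spec
for _ in range(100):
    action = np.random.uniform(low, high)  # random actions, just to drive the loop
    env.step(action)
    env.render()  # displayed via the OpenCV renderer
```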

# Improvements
The following briefly describes other changes that improve on the pre-existing structure; this is a highlighted list of changes, not an exhaustive one.

- Standardized end-effector frame inference (#25). Now, all end-effector frames are correctly inferred from raw robot XMLs and take into account arbitrary relative orientations between robot arm link frames and gripper link frames.

- Improved robot textures (#27). With added support from DeepMind's mujoco bindings for obj texture files, all robots are now natively rendered with more accurate texture maps.

- Revamped macros (#30). Macros now reference a single macro file that can be arbitrarily specified by the user (see the sketch below).
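
  For example, overriding a macro at runtime might look like the following (a sketch; only `MUJOCO_GPU_RENDERING` is taken from the notes below, and the import path assumes the single macro file lives at `robosuite/macros.py`):

  ```python
  import robosuite.macros as macros

  # Macros are plain module-level flags; override them before creating an env.
  macros.MUJOCO_GPU_RENDERING = True
  ```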

- Improved method for specifying the GPU ID (#29). The new logic is as follows (see the sketch after this list):
  1. If `render_device_gpu_id=-1` and neither `MUJOCO_EGL_DEVICE_ID` nor `CUDA_VISIBLE_DEVICES` is set, we choose the first available device (usually `0`) if `macros.MUJOCO_GPU_RENDERING` is `True`, and otherwise use the CPU;
  2. If `CUDA_VISIBLE_DEVICES` or `MUJOCO_EGL_DEVICE_ID` is set, we make sure it dominates over the programmatically defined GPU device id;
  3. If `CUDA_VISIBLE_DEVICES` and `MUJOCO_EGL_DEVICE_ID` are both set, we use `MUJOCO_EGL_DEVICE_ID` and make sure it is listed in `CUDA_VISIBLE_DEVICES`.
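
  A hypothetical sketch of those rules in Python (`resolve_render_gpu_id` and its arguments are illustrative, not the actual robosuite function):

  ```python
  import os

  def resolve_render_gpu_id(render_device_gpu_id, gpu_rendering):
      """Illustrative device selection following rules 1-3 above."""
      egl_id = os.environ.get("MUJOCO_EGL_DEVICE_ID")
      cuda_ids = os.environ.get("CUDA_VISIBLE_DEVICES")

      if egl_id is not None and cuda_ids is not None:
          # Rule 3: MUJOCO_EGL_DEVICE_ID wins, but must be listed in CUDA_VISIBLE_DEVICES.
          assert egl_id in cuda_ids.split(","), "EGL device not visible to CUDA"
          return int(egl_id)
      if egl_id is not None or cuda_ids is not None:
          # Rule 2: environment variables dominate the programmatically defined id.
          return int(egl_id) if egl_id is not None else int(cuda_ids.split(",")[0])
      if render_device_gpu_id == -1:
          # Rule 1: no hints anywhere; first device if GPU rendering is on, else CPU.
          return 0 if gpu_rendering else None  # None -> render on CPU
      return render_device_gpu_id
  ```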

- Updated the robosuite docs

- Added new papers


# Critical Bug Fixes
- Fix Sawyer IK instability bug (#25)


# Other Bug Fixes
- Fix iGibson renderer bug (#21)


-------

## Contributor Spotlight
We would like to introduce the newest members of our robosuite core team, all of whom have contributed significantly to this release!
@awesome-aj0123
@snasiriany
@zhuyifengzju