Getting the top_site and bottom_site parameters #21
Comments
The 3-D coordinate of the site matters, and the site references body coordinates, because it is used when placing the object on the table. @anchit should be able to tell you how the sites for the STL files are generated.
Yes. But I am not sure how these values are computed, because when I compute them using the top-most and bottom-most points of the mesh, I do not get the same values as those written in the XML file.
The top and bottom sites for the mesh objects have not been computed in a principled way. We have just approximately tuned them to what works well. Currently these are only used to initialize the object at the right place, and hence being accurate isn't necessary.
I see, thanks for the info. But is the method I described above the intended way to compute these parameters? I am trying to load random ShapeNet objects into the scene, so it would be beneficial to have these computed automatically.
I think it would work. The objects might tilt a bit because the x, y coordinates are not necessarily correct, but what you described is what I would do for ShapeNet as well.
Okay, thanks a lot.
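For reference, here is a minimal sketch of the procedure discussed in this thread (mean-centering the mesh, then taking the z-extrema). It assumes the `trimesh` package and a placeholder `object.stl` path, and it follows the description in the comments rather than any tooling shipped with robosuite.

```python
# Sketch of the approach discussed above: center the mesh at its vertex
# mean, then take the z-extents as the bottom/top site offsets.
# Assumes the `trimesh` package and a hypothetical "object.stl" path.
import numpy as np
import trimesh

mesh = trimesh.load("object.stl")      # hypothetical mesh file
verts = np.asarray(mesh.vertices)

center = verts.mean(axis=0)            # treat the vertex mean as the object center
centered = verts - center              # shift so the center sits at the origin

bottom_site = np.array([0.0, 0.0, centered[:, 2].min()])  # lowest point along z
top_site = np.array([0.0, 0.0, centered[:, 2].max()])     # highest point along z

print("bottom_site pos:", bottom_site)
print("top_site pos:", top_site)
```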
Baxter robot & some sample environments
# robosuite 1.4.0 Release Notes

- Highlights
- New Features
- Improvements
- Critical Bug Fixes
- Other Bug Fixes

# Highlights

This release of robosuite refactors our backend to leverage DeepMind's new [mujoco](https://github.com/deepmind/mujoco) bindings. Below, we discuss the key details of this refactoring:

## Installation

Installation is now much simpler, with mujoco installed directly on Linux or Mac via `pip install mujoco`. Importing mujoco is now done via `import mujoco` instead of `import mujoco_py`.

## Rendering

The new DeepMind mujoco bindings do not ship with an onscreen renderer. As a result, we've implemented an [OpenCV renderer](https://github.com/ARISE-Initiative/robosuite/blob/master/robosuite/utils/opencv_renderer.py), which provides most of the core functionality of the original mujoco renderer but has a few limitations (most significantly, no glfw keyboard callbacks and no ability to move the free camera).

# Improvements

The following briefly describes other changes that improve on the pre-existing structure. This is not an exhaustive list, but a highlighted list of changes.

- Standardized end-effector frame inference (#25). All end-effector frames are now correctly inferred from raw robot XMLs and take into account arbitrary relative orientations between robot arm link frames and gripper link frames.
- Improved robot textures (#27). With added support from DeepMind's mujoco bindings for obj texture files, all robots are now natively rendered with more accurate texture maps.
- Revamped macros (#30). Macros now reference a single macro file that can be arbitrarily specified by the user.
- Improved method for specifying the GPU ID (#29). The new logic is as follows (a sketch illustrating this priority appears after these notes):
  1. If `render_device_gpu_id=-1` and neither `MUJOCO_EGL_DEVICE_ID` nor `CUDA_VISIBLE_DEVICES` is set, we either choose the first available device (usually `0`) if `macros.MUJOCO_GPU_RENDERING` is `True`, or otherwise use the CPU.
  2. If `CUDA_VISIBLE_DEVICES` or `MUJOCO_EGL_DEVICE_ID` is set, we make sure it dominates over any programmatically defined GPU device id.
  3. If `CUDA_VISIBLE_DEVICES` and `MUJOCO_EGL_DEVICE_ID` are both set, then we use `MUJOCO_EGL_DEVICE_ID` and make sure it is defined in `CUDA_VISIBLE_DEVICES`.
- Updated the robosuite docs.
- Added new papers.

# Critical Bug Fixes

- Fix Sawyer IK instability bug (#25)

# Other Bug Fixes

- Fix iGibson renderer bug (#21)

---

## Contributor Spotlight

We would like to introduce the newest members of our robosuite core team, all of whom have contributed significantly to this release! @awesome-aj0123 @snasiriany @zhuyifengzju
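Following up on the GPU ID logic above, here is a minimal standalone sketch of that selection priority. The function name and arguments are illustrative, and this is not robosuite's actual implementation.

```python
# Illustration of the GPU-selection priority described in the release notes.
# `render_device_gpu_id` and the macros flag mirror the names used above;
# the helper itself is hypothetical.
import os

def resolve_render_device(render_device_gpu_id, gpu_rendering_enabled):
    egl_id = os.environ.get("MUJOCO_EGL_DEVICE_ID")
    cuda_ids = os.environ.get("CUDA_VISIBLE_DEVICES")

    # 1. Nothing specified anywhere: pick the first device if GPU rendering
    #    is enabled via macros, otherwise fall back to CPU.
    if render_device_gpu_id == -1 and egl_id is None and cuda_ids is None:
        return 0 if gpu_rendering_enabled else "cpu"

    # 3. Both environment variables set: MUJOCO_EGL_DEVICE_ID wins, but it
    #    must appear among the devices listed in CUDA_VISIBLE_DEVICES.
    if egl_id is not None and cuda_ids is not None:
        assert egl_id in cuda_ids.split(","), (
            "MUJOCO_EGL_DEVICE_ID must be defined in CUDA_VISIBLE_DEVICES"
        )
        return int(egl_id)

    # 2. A single environment variable set: it dominates the programmatic id.
    if egl_id is not None:
        return int(egl_id)
    if cuda_ids is not None:
        return int(cuda_ids.split(",")[0])

    # Otherwise honor the explicitly requested device id.
    return render_device_gpu_id
```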
Hi,
I am trying to load different objects into your amazing environment to train a general-purpose grasper. For that I have to provide a different XML for each object. These XMLs contain "site" parameters that specify the topmost and bottommost parts of the object relative to its center, and from the values it looks like they are only computed along the z-direction. But when I use the models provided in the meshes of this repository and try to compute the top site and bottom site myself, I do not get the same answers as those written in the XML files. So I compute the mean of the STL vertices and treat it as the center, subtract this center so the mesh is centered at the origin, and then take the minimum and maximum in the z-direction to get the bottom site and top site.
Maybe I am doing something wrong; any help on this topic would be great.
Thanks
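As a rough illustration of how the computed offsets could be written back into an object XML, here is a short sketch. The site names follow the issue title (`top_site` / `bottom_site`); the attribute layout is illustrative, not copied from robosuite's object XMLs.

```python
# Sketch: format the computed z-extents as the site entries the issue
# describes.  The element layout here is an assumption for illustration.
def site_elements(bottom_z, top_z):
    return (
        f'<site name="bottom_site" pos="0 0 {bottom_z:.4f}" />\n'
        f'<site name="top_site" pos="0 0 {top_z:.4f}" />'
    )

print(site_elements(-0.045, 0.045))  # example values, not from a real mesh
```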