How to know the specific global location for teleport? #806
Comments
Hi @ruinianxu, thanks for the kind words 😄
Check this out: it moves the agent to the closest position to an object, where one can then obtain the RGB image frame.
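A minimal sketch of that idea, assuming the usual AI2-THOR metadata layout (the `GetReachablePositions` action returns a list of `{"x", "y", "z"}` dicts in `event.metadata["actionReturn"]`); the helper itself is pure Python, and the commented lines show how it would plug into a real `Controller`:

```python
def closest_reachable_position(reachable, target):
    """Pick the reachable position nearest to a target object's position.

    `reachable` is the list returned by the GetReachablePositions action
    (event.metadata["actionReturn"]); `target` is an object's
    metadata["position"] dict. Distance is measured in the ground plane (x, z).
    """
    return min(
        reachable,
        key=lambda p: (p["x"] - target["x"]) ** 2 + (p["z"] - target["z"]) ** 2,
    )


# Synthetic stand-in for event.metadata["actionReturn"]:
reachable = [
    {"x": 0.0, "y": 0.9, "z": 0.0},
    {"x": 1.0, "y": 0.9, "z": 1.0},
    {"x": 2.0, "y": 0.9, "z": 2.0},
]
# Synthetic stand-in for an object's metadata["position"]:
apple_position = {"x": 1.2, "y": 1.1, "z": 0.8}

best = closest_reachable_position(reachable, apple_position)  # the middle position
# With the real controller you would then teleport and read the frame:
#   controller.step(action="GetReachablePositions")
#   best = closest_reachable_position(
#       controller.last_event.metadata["actionReturn"], obj["position"])
#   event = controller.step(action="Teleport", position=best)
#   rgb = event.frame  # numpy array of shape (height, width, 3)
```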
I might not be understanding this question correctly. The demo (ai2thor.allenai.org/demo) shows all of the scenes that appear in AI2-THOR, along with all the objects and their locations. It also lets you test out a few domain randomizations. What are you looking for beyond this?
@mattdeitke Besides the questions about robots, I also have some other questions. Firstly, the documentation on the website is comprehensive, but there aren't many tutorials provided. I wonder where I can find some starting examples. Secondly, is there a way to augment the objects spawned in the scene, like changing their texture, material, or color? Thank you so much for your help again.
Hmm.. this is a bit hard to do from Python. I'd probably suggest opening up AI2-THOR in Unity and panning around the scenes there. To open the project in Unity, download the Unity Editor version 2019.4.20 LTS for OSX (the Linux Editor is currently in Beta) from the Unity Download Archive, and clone this repo locally. Then, in Unity, open up the project. From the explorer window in Unity, type in a scene name (e.g., FloorPlan1) and double-click it. That will then open up the scene like so:
Not yet, but many are in the works. Are there any in particular that you'd like to see?
Yes! Check out: https://ai2thor.allenai.org/ithor/documentation/objects/domain-randomization. It is also available to test on the demo.
@mattdeitke
@mattdeitke Therefore, I wonder: instead of checking and recording the position and rotation for every object of interest in the scene, is there any automatic way to teleport the agent to the target object with the correct rotation, e.g., by getting the global rotation of the object? Thanks in advance.
@mattdeitke However, I met some other problems. The first problem is that in the scene FloorPlan1, even though the ButterKnife is listed in event.metadata["objects"], it can't be found anywhere. I used the [demo](https://ai2thor.allenai.org/demo/) to explore the spawned scene. The second problem is about re-positioning an object initialized in the scene. Following the API, I tried the two actions SetObjectPoses and PlaceObjectAtPoint, but neither of them worked. I wonder if I did something wrong and whether there are any other ways to move the target object somewhere else. The reason I want to move objects is that I need certain objects, like an apple and a knife, to be viewable in one image. I am open to any other solutions that achieve this purpose. Really appreciate your help.
In FloorPlan1, the ButterKnife is on the table. It kinda blends in with it though.
Have you tried out InitialRandomSpawn? It is also present on the demo, and will allow objects to be randomized in position.
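Since the goal is to get certain objects (e.g., an Apple and a Knife) into one image, a small helper over `event.metadata["objects"]` can check whether everything you want is visible after a respawn. This is a sketch under the assumption that `InitialRandomSpawn` accepts a `randomSeed` parameter (as on the demo), so you could loop over seeds until the check passes:

```python
def visible_object_types(objects):
    """Object types flagged visible to the agent in an event's metadata."""
    return {o["objectType"] for o in objects if o["visible"]}


def all_in_view(objects, wanted):
    """True if every wanted object type is currently visible."""
    return set(wanted) <= visible_object_types(objects)


# Synthetic stand-in for event.metadata["objects"]:
objects = [
    {"objectType": "Apple", "visible": True},
    {"objectType": "Knife", "visible": True},
    {"objectType": "Mug", "visible": False},
]

ok = all_in_view(objects, ["Apple", "Knife"])  # True for this metadata
# With the real controller you could loop over seeds until both are in view:
#   for seed in range(20):
#       event = controller.step(action="InitialRandomSpawn", randomSeed=seed)
#       if all_in_view(event.metadata["objects"], ["Apple", "Knife"]):
#           break
```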
@mattdeitke
@mattdeitke After calling the SetObjectPoses action, the image obtained from last_event isn't updated. Therefore, I need to make the agent do some trivial actions, like first turning left and then right; only then can I get the image I want. That doesn't seem very reasonable. My question is still the one I had before: how can I get the global rotation or transformation of the agent and an object? I followed the approach you suggested, but found that it can't guarantee the agent will face the target object after making the rotation. I would be very grateful if you could provide some hints about how to retrieve the global transformation. Thanks in advance.
Please see #538. I suspect you are looking at the Unity window, instead of looking at the frames returned by the controller.
Can you clarify with an example of where it failed? The only case I can think of where it might fail is if the agent needs to look up/down in order to see an object. For instance, if an object is near the floor, it may not be visible unless the agent looks down.
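For that look-up/down case, the needed camera pitch can be computed from the height difference and the horizontal distance. A sketch, assuming AI2-THOR's convention that a positive camera horizon tilts the view downward (and that `Teleport` accepts a `horizon` parameter, which you should verify against your version's docs):

```python
import math


def horizon_to_face(camera_pos, obj_pos):
    """Camera horizon (degrees) that vertically centers the object.

    Assumes a positive horizon tilts the camera downward, so an object
    below the camera yields a positive value.
    """
    dy = obj_pos["y"] - camera_pos["y"]
    dxz = math.hypot(obj_pos["x"] - camera_pos["x"],
                     obj_pos["z"] - camera_pos["z"])
    return -math.degrees(math.atan2(dy, dxz))


cam = {"x": 0.0, "y": 1.5, "z": 0.0}
knife = {"x": 0.0, "y": 0.5, "z": 1.0}
h = horizon_to_face(cam, knife)  # ~45.0: look down toward the floor
# With the real controller (parameter name is an assumption to verify):
#   controller.step(action="Teleport", position=..., horizon=h)
```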
@mattdeitke However, I am still confused about the global coordinate system of AI2THOR. Using the default scene FloorPlan1 as an example, I attached the bird's eye view for illustration. I used relative positions between the apple, the butterknife, and the camera of the agent, and found that the x and z axes are the red and blue vectors, respectively. The y-axis is the green vector in the second figure. However, the coordinate system in the second figure doesn't follow the right-hand rule. Did I do something wrong? Thank you so much for your help again.
Unity's global coordinate system uses the left-handed alignment. It varies by game engine, but I personally think it makes the most sense when you think of the z-axis as the "forward" vector, with x as "right" and y as "up", just like on a 2D graph. All objects in a scene have their own local transforms as well, based on their position and orientation with respect to the scene's "origin", where the position and orientation are zero. Incidentally, our agent's default local orientation is the world z-direction, since Unity treats that as "forward", so to run with Matt's previous example, rotating the agent to face an object from its default rotation would look more like this: Once you wrap your head around which axes mean which directions, then it's indeed a simple process of trigonometry. For example, use the law of cosines with the triangle formed by the agent, the object, and a point directly in front of the agent's current sight-line to calculate ϑ easily.
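In practice, a two-argument arctangent is even simpler than the law of cosines. A sketch under the conventions described above (left-handed, y up, yaw measured clockwise from +z when viewed from above, so yaw = atan2(dx, dz)):

```python
import math


def yaw_to_face(agent_pos, obj_pos):
    """Global y-rotation (degrees) pointing the agent's forward (+z) axis
    at the object, using Unity's left-handed convention where yaw increases
    clockwise from +z when viewed from above."""
    dx = obj_pos["x"] - agent_pos["x"]
    dz = obj_pos["z"] - agent_pos["z"]
    return math.degrees(math.atan2(dx, dz)) % 360.0


agent = {"x": 0.0, "y": 0.9, "z": 0.0}
obj = {"x": 1.0, "y": 1.1, "z": 1.0}
yaw = yaw_to_face(agent, obj)  # ~45.0: object is diagonally front-right
# With the real controller:
#   controller.step(action="Teleport", position=agent,
#                   rotation=dict(x=0, y=yaw, z=0))
```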
@elimvb My last question for now is about the Knife and ButterKnife objects. I found that only one of them will be spawned in the kitchen. Is there a way to check which one was generated without actually seeing it? Currently my strategy is to first teleport the agent to the location of one of the objects; if that object's visible flag is false, then I know the other object was generated. If there is a way that doesn't require teleporting, that would be great. Thanks in advance.
Can you check the object metadata? For example:

```python
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan24")
for obj in controller.last_event.metadata["objects"]:
    if obj["objectType"] == "Knife":
        # Knife exists in the scene
        break
    elif obj["objectType"] == "ButterKnife":
        # ButterKnife exists in the scene
        break
```
@mattdeitke
Hi all,
First of all, thanks for the amazing work. I am working on using AI2THOR to generate the dataset for my own project. I mainly want to take some photo-realistic images and don't really need the robot to perform navigation or manipulation.
I have some questions about AI2THOR with Unity. The first question is how I can find the specific global location for placing the robot in order to take images where particular objects are within the camera view. The second question is similar: I wonder if it is possible to spawn a scene and move around it, like what is shown in the demo, so that I can get a better understanding of the scene I generate. If that is possible, it will save me a lot of time in analysing the scene and determining what kinds of objects are in it and whether I want to add some others.
Thanks in advance. Really appreciate any suggestions.