Helper function to translate world coords to screen space #1258
Conversation
Looks nice. Is the inverse functionality similarly simple? It would make sense to include it in this PR, and it is very useful for converting mouse / touch events into game events when clicking on units and so on. Down the line, I think it makes sense to merge in the full functionality of bevy_mod_picking, also apparently of your creation :) That's a larger scope though, and this function and its inverse are good, simple building blocks that I think would fit in well for 0.5.
Unfortunately, the inverse is not as simple, hence the plugin. 😄 This calculation is trivial because it collapses a 3D point along the z-axis into the screen (2D), removing a degree of freedom. Conversely, transforming a (2D) screen space coordinate into world space (3D) leaves you with an unsolved degree of freedom: you are left with a line, or ray, that goes through the scene.

Constructing that ray is as trivial as the code in this PR; in fact, this code is the inverse of the ray-building code in the plugin. But it's not nearly as useful on its own, because you still need to do something with that ray. That said, ray construction might be useful for some applications, and could be part of a larger discussion around how to break apart the picking plugin into useful, reusable components. The picking plugin does this to some extent already: you can construct rays using the mouse position, screen space coordinates, or a manually defined Mat4. These rays can then be consumed by the raycasting system to detect intersections with meshes and spit out the 3D coordinates you need.

As for this PR, I think there is some useful discussion to be had about the interface of this function. Should it accept simple primitives like Mat4/Vec2/Vec3, or should we use the type system to poka-yoke the conversion (e.g. taking GlobalTransform and Camera as input, then using these to get the needed primitives)? The latter would prevent errors, like someone sticking the wrong Mat4 transform into the function, but might reduce its flexibility.
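A minimal sketch of that ray construction, assuming the same camera data and NDC convention as the forward conversion in this PR; the function name, signature, and return shape here are hypothetical, not part of any Bevy API:

use bevy::prelude::*;

/// Sketch: un-project a pixel position into a world space ray (origin + direction).
/// Assumes the wgpu-style NDC convention where z runs from 0.0 (near) to 1.0 (far).
fn screen_to_world_ray(
    screen_position: Vec2,       // e.g. the cursor position in pixels
    window_size: Vec2,           // window width/height in pixels
    projection_matrix: Mat4,     // Camera::projection_matrix
    camera_transform: &GlobalTransform,
) -> (Vec3, Vec3) {
    // Invert the pixel -> NDC mapping: [0, window_size] becomes [-1, 1]
    let ndc_xy = screen_position / window_size * 2.0 - Vec2::one();
    // Invert the world -> NDC transform used by the forward conversion
    let ndc_to_world = camera_transform.compute_matrix() * projection_matrix.inverse();
    // Un-project two points at different depths to recover the missing degree of freedom
    let near = ndc_to_world.transform_point3(ndc_xy.extend(0.0));
    let far = ndc_to_world.transform_point3(ndc_xy.extend(1.0));
    // Ray origin and (unnormalized) direction through the scene
    (near, far - near)
}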
Ah, the linear algebra is rushing back to me... I thought that might be the case.
I definitely prefer the latter: it makes the use much more obvious and avoids errors. Especially since this code is so simple, any strange use case can just duplicate it (or hack together a GlobalTransform, etc.).
I agree. I tried this initially, but had some trouble when actually using the function in practice due to ownership issues when trying to pass those types as arguments. I'll take another crack at it.
I've updated the interface to something that feels ergonomic and should prevent most errors. To illustrate this, here is the system I've been using to test this functionality:

fn update_text_position(
windows: Res<Windows>,
mut text_query: Query<&mut Style, With<FollowText>>,
mesh_query: Query<&GlobalTransform, With<Handle<Mesh>>>,
camera_query: Query<(&Camera, &GlobalTransform), With<ThreeDCam>>,
) {
for mesh_position in mesh_query.iter() {
for camera in camera_query.iter() {
for mut style in text_query.iter_mut() {
if let Some(coords) = world_to_screen_coordinate(mesh_position, camera, &windows) {
style.position.left = Val::Px(coords.x);
style.position.bottom = Val::Px(coords.y);
}
}
}
}
}

The other change here is that the function now returns the actual pixel coordinates instead of the NDC, which better fits the Bevy spirit of ergonomics and the Principle of Least Surprise. If a user doesn't know how to manually implement this, they are also probably unaware of what NDC are. This now requires access to the Windows resource so the function can look up the window dimensions.
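A minimal sketch of that NDC-to-pixel mapping, assuming NDC x/y in [-1, 1] with the screen space origin at the bottom-left of the window; the helper name is hypothetical:

use bevy::math::Vec2;

/// Sketch: rescale NDC x/y from [-1, 1] to pixel coordinates in [0, window_size].
fn ndc_to_pixels(ndc_xy: Vec2, window_size: Vec2) -> Vec2 {
    // Shift [-1, 1] to [0, 2], halve to [0, 1], then scale by the window size in pixels
    (ndc_xy + Vec2::one()) / 2.0 * window_size
}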
That looks really useful.
@FrankenApps: sure thing, here you go: https://github.com/aevyrie/bevy_world_to_screenspace
I'm sold! But I have a few small API tweaks in mind:
impl Camera {
/// Given coordinates in world space, use the camera and window information to compute the
/// screen space coordinates.
pub fn world_to_screen(
&self,
windows: &Windows,
camera_transform: &GlobalTransform,
world_position: Vec3,
) -> Option<Vec2> {
let window = windows.get(self.window)?;
let window_size = Vec2::new(window.width(), window.height());
// Build a transform to convert from world to NDC using camera data
let world_to_ndc: Mat4 =
self.projection_matrix * camera_transform.compute_matrix().inverse();
let ndc_space_coords: Vec3 = world_to_ndc.transform_point3(world_position);
// NDC z-values outside of 0 < z < 1 are behind the camera and are thus not in screen space
if ndc_space_coords.z < 0.0 || ndc_space_coords.z > 1.0 {
return None;
}
// Once in NDC space, we can discard the z element and rescale x/y to fit the screen
let screen_space_coords = (ndc_space_coords.truncate() + Vec2::one()) / 2.0 * window_size;
Some(screen_space_coords)
}
}
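For illustration, here is how the earlier test system could call the proposed method; this is a sketch against the suggested signature, with the marker components assumed from that example:

use bevy::prelude::*;

// Marker components assumed from the earlier test system
struct FollowText;
struct ThreeDCam;

fn update_text_position(
    windows: Res<Windows>,
    mut text_query: Query<&mut Style, With<FollowText>>,
    mesh_query: Query<&GlobalTransform, With<Handle<Mesh>>>,
    camera_query: Query<(&Camera, &GlobalTransform), With<ThreeDCam>>,
) {
    for mesh_transform in mesh_query.iter() {
        for (camera, camera_transform) in camera_query.iter() {
            for mut style in text_query.iter_mut() {
                // The method lives on Camera, so the wrong Mat4 can't be passed in by accident
                if let Some(coords) =
                    camera.world_to_screen(&windows, camera_transform, mesh_transform.translation)
                {
                    style.position.left = Val::Px(coords.x);
                    style.position.bottom = Val::Px(coords.y);
                }
            }
        }
    }
}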
Great feedback, thanks. I've made the changes and have tested that it works locally.
Add Camera::world_to_screen to convert world coordinates to screen space
As discussed here: https://discord.com/channels/691052431525675048/742884593551802431/800198280901296128
This adds a helper function to convert world space coordinates into screen space. This is useful any time you need to draw UI elements relative to objects in the world.
An example showing a cube's world space position being converted into 2D screen space coordinates for the UI text:
bevy.2021-01-20.01-55-03_Trim.mp4