Output screen texture to png #1207
Comments
Also related: #22
I think Issue #22 is higher priority, or basically the same thing, because it has the advantage of disconnecting the camera from a view screen, and it eliminates the bug where re-sizing the window crashes the game. I'm happy to help out on this issue and get it moving forward, but I would need somebody to walk me through the render pipeline and explain how it works: I spent a day poking at its internals and couldn't make sense of the design decisions or how all the parts fit together.
I started working on this branch, a fork of @TheRawMeatball's branch, where I tried to implement an example based on his code and fixed some bugs along the way. There are still some outstanding problems.
@ctangell take a look here: https://github.com/mrk-its/bevy/tree/render_to_texture - it adds a working 3d/render_to_texture.rs example.
I'm also interested in getting this working. @ctangell I'm not sure if this still matters, but there's a problem with the code you used to write the PNG file.
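For reference, a frequent pitfall with this kind of readback is wgpu's requirement that each buffer row be padded up to a 256-byte alignment; the padding must be stripped before encoding, or the image comes out skewed or blank. Below is a minimal sketch (not the original snippet) of writing such a buffer to a PNG with the `image` crate, assuming an RGBA8 texture:

```rust
// Strip wgpu's per-row padding, then encode the pixels as a PNG.
// Assumes an RGBA8 texture (4 bytes per pixel) and a readback buffer whose
// rows were padded up to COPY_BYTES_PER_ROW_ALIGNMENT (256 bytes).
fn save_png(padded: &[u8], width: u32, height: u32, padded_bytes_per_row: usize) {
    let row_bytes = width as usize * 4;
    let mut pixels = Vec::with_capacity(row_bytes * height as usize);
    for row in padded.chunks(padded_bytes_per_row) {
        pixels.extend_from_slice(&row[..row_bytes]); // drop the padding
    }
    image::save_buffer("screen.png", &pixels, width, height, image::ColorType::Rgba8)
        .expect("failed to write PNG");
}
```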
That said, I still haven't figured out how to make it work, and saving a JPEG results in a black image. @mrk-its That's a great example of rendering to a texture, thanks! I've been playing with it and trying to save the texture to an image file, but still unsuccessfully. Not to hijack this issue, but I'm still trying to figure out how the render graph works. There are a couple of edges in your example that don't seem to be needed for it to work. Could you please let me know if they're really needed, and why? Thanks!
@mrk-its Nevermind, I actually figured out why those edges are needed. I guess I was lucky that it worked without them. I've finally managed to get it working: I started with the render_to_texture example, and then implemented a custom render node that takes that texture as input.
Here's a working (but hacky) example of rendering to a JPEG file: https://github.com/rmsc/bevy/tree/render_to_file
Summarizing related feedback from
I have a use case which is basically a subset of this. I guess it's similar to the egui issue, but not specific to egui. I would like to be able to render textures to another texture, like a TextureAtlas but more generally. (As a one-time thing when called; that texture then shouldn't get re-rendered from its sources.) This is useful in a number of ways, like dynamically generating sprites or panels that don't need to be re-rendered, or generating a map from many tile textures. (Ideally, that texture could then be used in multiple places.) As I understand the render-to-texture example, that actually does an each-frame render — the texture basically becomes a live view into another world. My need is more like the save-to-file case, except... don't actually write, but instead modify the texture in place. I'm imagining this working basically like a draw call on the destination texture, which would put the bits from the source texture into it at a given offset. It would be nice for there also to be a variant that takes a source rectangle, where of course that rectangle selects which part of the source gets copied (a rough sketch follows below).

It's my understanding that as currently written, the functions to create a TextureAtlas just use the CPU and main memory. So this feature could actually be used to refactor that — TextureAtlas is basically a use case of what I'm looking for. There could also be a "blit" version of this which uses fancy material/shader stuff, but I'm a humble old-school 2D person and don't understand any of that. :) Also beyond my understanding, but: if this uses the same basic rendering pipeline as drawing to the screen (even though only on demand rather than as part of the loop), it could then automatically benefit from future improvements like render batching and culling. I can file this all as a separate RFE if that would be helpful, but it seems everything here is so related that it might actually be close to being solved as basically a side effect.
As a natural extension of this, it would be very nice to be able to natively capture the screen (or a camera view) to a .gif animation file.
Is there a less hacky / simpler solution for reading a camera's pixels since then (for saving to a file, for instance)?
What problem does this solve or what need does it fill?
Would enable using Bevy as a general-purpose simulation tool for robotics and machine learning research, by allowing the screen texture to be piped over to a machine learning engine, or by incorporating direct inference into Bevy.
What solution would you like?
The ideal solution would be an additional node that can be attached to the render pipeline and returns a PNG of what is shown on the screen. Being able to scale that, so it's produced not every frame but rather every 10 or 100 frames, would also help with performance (a small sketch of the idea follows below).
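A minimal sketch of the throttling idea, assuming the capture lives in a node's update method; `capture_frame` is a placeholder for the actual texture readback:

```rust
// Hypothetical capture node that only triggers a readback every N frames.
struct ScreenCaptureNode {
    frame: u64,
    every_n_frames: u64, // e.g. 10 or 100 to keep the cost down
}

impl ScreenCaptureNode {
    fn update(&mut self) {
        self.frame += 1;
        if self.frame % self.every_n_frames == 0 {
            // capture_frame(); // placeholder: copy texture -> buffer -> PNG
        }
    }
}
```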
I have tried the following on my own so far: I created in `WgpuRenderResourceContext` code to copy a texture to a buffer, then created in `impl RenderResourceContext for WgpuRenderResourceContext` a function to get the buffer out of the GPU, and then tried to wire it up in `impl Node for WindowTextureNode`'s `fn update` (a rough sketch of the copy step follows below). Unfortunately, that is as far as I got, as the resulting PNG image is empty (what comes out is an array of zeros). Ideally this should be its own node attached to the very end of the render pipeline, after the screen texture is written.
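For comparison, here is a sketch of that texture-to-buffer copy in raw wgpu (names taken from recent wgpu releases; Bevy's internal wrappers at the time differed). Two details commonly produce an all-zero readback: the buffer must be created with `MAP_READ | COPY_DST` usage, and `bytes_per_row` must be padded up to `COPY_BYTES_PER_ROW_ALIGNMENT` (256):

```rust
// Record a copy of `texture` into a freshly created readback buffer.
fn copy_texture_to_readback_buffer(
    device: &wgpu::Device,
    encoder: &mut wgpu::CommandEncoder,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
) -> wgpu::Buffer {
    let bytes_per_pixel = 4; // assuming an RGBA8 texture
    let align = wgpu::COPY_BYTES_PER_ROW_ALIGNMENT; // 256
    let unpadded = width * bytes_per_pixel;
    let padded = (unpadded + align - 1) / align * align; // round rows up to 256

    let buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("screen readback"),
        size: padded as u64 * height as u64,
        usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });

    encoder.copy_texture_to_buffer(
        texture.as_image_copy(),
        wgpu::ImageCopyBuffer {
            buffer: &buffer,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(padded), // padded, not width * 4
                rows_per_image: None,
            },
        },
        wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    );
    buffer
}
```

Even then, the bytes only become readable after mapping the buffer (`buffer.slice(..).map_async(..)`) and polling the device, and each row's padding has to be stripped before encoding.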
What alternative(s) have you considered?
Not understanding how the render pipeline works, I chose to try `window_texture_node.rs`, as that is the only place with the necessary `bevy_render::texture::TextureDescriptor` for the screen buffer. From a comment on Discord it seems that the texture to extract really lives in `window_swapchain_node.rs`. The problem is that that node doesn't have the necessary information in a `TextureDescriptor` to do the texture -> buffer -> PNG copying. So some kind of information passing is needed, from the `WindowTextureNode` (where the relevant `TextureDescriptor` is stored) to whichever node actually has access to the final screen texture. I tried to understand the default render pipeline in `base.rs` but couldn't make much sense of what was being passed around.

Additional context
The above code causes the game to crash when the window is re-sized.
Additionally, a compute shader that converts the depth buffer to a u16 buffer scaled to match the output of a physical depth camera (like an Intel RealSense) would be super handy (a sketch of the conversion follows below).
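A CPU-side sketch of that conversion; a compute shader would apply the same formula per texel. It assumes a standard (non-reversed) perspective projection writing depth into [0, 1], clip planes in meters, and RealSense-style output of millimeters as u16:

```rust
// Convert a normalized depth sample back to linear distance, then scale to
// u16 millimeters (the unit a RealSense emits). `near`/`far` are the
// camera's clip planes in meters; `d` is the raw [0, 1] depth-buffer value.
fn depth_to_millimeters(d: f32, near: f32, far: f32) -> u16 {
    // Invert the standard perspective depth mapping to get view-space depth:
    // d = (far / (far - near)) * (1 - near / z)  =>  solve for z.
    let z = near * far / (far - d * (far - near));
    (z * 1000.0).round().min(u16::MAX as f32) as u16
}
```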