WIP: API for saving a RenderContext to a PietImage #387
Conversation
Question: could this be done in client code, where you keep an image in your widget and use it as a cache? EDIT: Ah I see, you need to be able to get a copy of the image you've rendered?
@derekdreery your statement in the edit is correct. The thing I'm doing is:
Then I can always restore from the image I've saved in step two and then just draw the mouse-over portion on top again, which is much faster than rendering the complex widget. I suppose I could create a widget-owned … In general I feel that it'd be desirable to have the ability to write a widget to an image, be it in memory or in a file (e.g. exporting a visualization, or for performance reasons in my case).
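The caching pattern described above can be sketched in plain Rust. This is a minimal illustrative sketch, not piet's API: all names (`CachedWidget`, `paint`, `render_expensive`) are hypothetical, and `Image` is just a pixel buffer standing in for a platform image. The widget pays the expensive render once, saves the result, and replays the cached copy on later paints.

```rust
// Hypothetical sketch of a widget-owned render cache (not piet's API).
// `Image` stands in for a platform image; here it is just a pixel buffer.
type Image = Vec<u8>;

struct CachedWidget {
    cache: Option<Image>,
    expensive_renders: usize, // counts how often the full render ran
}

impl CachedWidget {
    fn new() -> Self {
        CachedWidget { cache: None, expensive_renders: 0 }
    }

    // Stand-in for the costly draw (e.g. hundreds of thousands of rects).
    fn render_expensive(&mut self) -> Image {
        self.expensive_renders += 1;
        vec![0u8; 16] // placeholder pixels
    }

    // Each paint: restore from the cache if present, else render and save.
    fn paint(&mut self) -> Image {
        if let Some(img) = &self.cache {
            img.clone() // cheap replay of the saved image
        } else {
            let img = self.render_expensive();
            self.cache = Some(img.clone());
            img
        }
    }
}
```

In the real widget, the hover highlight would then be drawn on top of the replayed image each frame, and the cache invalidated whenever the underlying data changes.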
I love the idea! One question: what do you do if your widget contains transparency, which should show whatever is underneath? Would another option for implementation be to have some sort of second render context? I think this is how Flutter works (IIRC).
Mmh, yeah, transparency would definitely be an issue with my current approach 🤔 Your suggestion with the second render context seems reasonable. Though it does seem like an API like the one I originally proposed would still be required for this, that is, some standard way of turning a render context into an image that can later be drawn again. I think this would also make e.g. this example of saving to a PNG a bit nicer:

```rust
let mut rc = bitmap.render_context();
// [... some drawing ...]
let image = rc.save_image(...);
PngEncoder::save_to_file(image)?;
```

Or something like that 🤔
Personal opinion: I think being able to render to images would be very useful, both for caching and for other things as well!
I can help implement this on at least Linux once the design is agreed.
Okay, so thinking this through: the idea is that you would draw into the render context as usual, but then at the end you would call 'save_image' and you would get back an image that reflects the current state of the render context? It is probably worth looking into whether this functionality is available on the major platforms, but I suspect it probably is. I'm trying to think through whether there are any major issues with this approach, and I think I'm slightly out of my depth, so I'm going to seek out a second opinion (cc @raphlinus), but overall if this is doable it seems like a nice bit of utility. One possible concern: dragging a window between monitors might require invalidation of the cache? There's not currently a 'changed window' event in druid.
I think this is possible but not quite what I was thinking. The concern about transparency is already noted. I'm also concerned that reading back from the presentation surface may be a performance concern; these things are generally optimized for writing into, not reading from.

For the roadmap, what I'd like to do here is create a render context specifically to hold a "layer," do the drawing operations into the layer, and at the end harvest the render context into an image. An important note: the same architecture works for recording drawing operations into a display list (for playback later); the only difference is the type of what you get back. That is an especially important path when the display list can be retained on the GPU (see the piet-gpu vision).

Ultimately, I'd like management of layers to be done semi-automatically in Druid, so that widgets that are not highly dynamic are drawn into layers and the paint cycle replays those (see Flutter for inspiration). However, it seems to me that manually creating a render context for the task may work for this particular use case. Can @x3ro confirm?

I have some ideas about the layer API (in particular, I'd really like to unlock being able to create them from other threads), but I think detailed design work on that should probably be in a separate issue.
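The display-list variant described above can be illustrated with a small sketch. This is a toy model under stated assumptions, not piet's actual API: `RecordingContext`, `DrawOp`, and `finish` are hypothetical names. Drawing calls are pushed onto a list rather than rasterized immediately, and the recording is harvested at the end for later playback.

```rust
// Hedged sketch (not piet's actual API) of recording drawing operations
// into a display list, which can later be replayed into any target.
#[derive(Clone, Debug, PartialEq)]
enum DrawOp {
    Clear,
    FillRect { x: f64, y: f64, w: f64, h: f64 },
}

#[derive(Default)]
struct RecordingContext {
    ops: Vec<DrawOp>,
}

impl RecordingContext {
    // Each "drawing" call only records the operation.
    fn clear(&mut self) {
        self.ops.push(DrawOp::Clear);
    }

    fn fill_rect(&mut self, x: f64, y: f64, w: f64, h: f64) {
        self.ops.push(DrawOp::FillRect { x, y, w, h });
    }

    // Harvest the recording; replaying the ops is the "layer" playback.
    // Harvesting into an image instead would only change this return type.
    fn finish(self) -> Vec<DrawOp> {
        self.ops
    }
}
```

The point of the sketch is the one noted in the comment above: whether `finish` yields a list of operations or an image is a difference in the return type, not in the recording architecture.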
Exactly. If I'm not mistaken, this is roughly equivalent to Flutter's toImage method. From the documentation:
My knowledge is very limited in this area, though this was somewhat surprising to me. I figured that any of these frameworks would also have to be fast at reading, since at some point the internal representation must be converted to pixel data that's displayed on screen at hopefully 60 FPS :D To the extent that I understand the … Maybe it's just me, but talking about it as an "image" that's being saved may also be a bit misleading, since I'm really just copying the internal representation of whatever framework we're drawing with. Are you thinking of something like copy_raw_pixels when you say it could be slow? If it is, that's not what I'm proposing.
How does this differ from saving the context to an image, i.e. why do the mentioned performance concerns not apply here?
I could probably try to replicate what's being done in the PNG example, i.e. create my own device, bitmap target, and then render context. I'll readily admit that this isn't super desirable, considering that I'd also have to synchronise size and pixel scale with Druid. Long story short: to my (admittedly untrained) eye it seems like having the ability to create a cached copy of the contents of a Piet RenderContext would be very useful.
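On synchronising size and pixel scale: an offscreen bitmap target is allocated in physical pixels, while widget geometry is in logical (display-independent) units, so the two are related by the DPI scale factor. A minimal sketch of that arithmetic (the function name is illustrative, not from piet):

```rust
// Convert a logical widget size to the physical pixel dimensions an
// offscreen bitmap target would need. Illustrative helper, not piet API.
fn physical_size(logical_w: f64, logical_h: f64, scale: f64) -> (usize, usize) {
    // Round rather than truncate, so fractional scales (e.g. 1.5 on some
    // displays) don't drop a pixel row or column.
    (
        (logical_w * scale).round() as usize,
        (logical_h * scale).round() as usize,
    )
}
```

Getting this mapping wrong is exactly the kind of bookkeeping the comment above calls undesirable: a mismatch between the cached bitmap's size and the window's scale shows up as blurry or clipped output.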
I'm coming from a GPU perspective thinking about performance. In Vulkan, there's no guarantee that the swapchain even supports image reading (see https://community.khronos.org/t/readpixels-on-vulkan/6797/2). Of course it has to in some form, usually the compositor reading from it (unless fullscreen or something else fancy), but generally what happens there is a transition under control of the platform/compositor. Before that transition, there is no guarantee. So the only way to reliably get an image out of a render target is to create that render target configured for that operation.

I am probably conflating two things in my mind here: there are challenges in getting an image out which is to be stored in GPU memory (see above, plus some synchronization fiddliness), but it's doable. On the other hand, reading image bytes to CPU memory tends to be ridiculously slow. But on reflection, I don't think there's anything about your proposed API that requires that.

A Vulkan-based renderer is somewhat in the future. The bigger question is to what extent existing backends support the operation you describe. I think Direct2D may be a problem: GetBitmap is implemented on a bitmap render target, but I'm not sure there's any standard way to get image data out of a render target created for a swapchain. It's possible I'm missing something, but if that's the case then I don't really see how to do this on Windows, and that's a problem. It is true you have to match up size and hi-dpi scaling.

I should add, I'm not questioning whether this is worthwhile or whether you're seeing a significant performance improvement. My only concern is that we're doing this in a way that doesn't box us in with GPU accelerated rendering.
Thanks for expanding on this. I'm quite interested in learning how a GPU-based backend would work, but have only just started learning in that direction. For now I'm happy to investigate how this could be implemented for the Direct2D backend.
On Wayland, which I have some experience of, in software mode you just share memory with the compositor, so reading out of framebuffers is the same as writing. I'm using cairo to render into that framebuffer, basically as an image. Things are different when you move onto the GPU: here Linux uses DMA, which allows the compositor and the application to share memory, but I don't know how the details work. Wayland has support for DMA.
Okay, so I haven't made much progress with D2D, mostly because of all of the screaming I've been doing, which then prompts the neighbours to complain. No, but seriously: I haven't really touched this style of API ever, so I think I need to take a step back and build an MVP in C++ first, then try to translate it to Rust 😅
Finally had some time to look into this more, and it seems like I found a working solution, though whether it's suitable for piet is hard for me to judge. The entire VS solution can be found here, but the relevant part is:

```cpp
// m_pRenderTarget is an ID2D1HwndRenderTarget, which inherits from
// ID2D1RenderTarget, same as the ID2D1DeviceContext that piet's d2d
// DeviceContext renders to. So CopyFromRenderTarget should work for
// both of them, but I have not verified this yet.
auto targetSize = m_pRenderTarget->GetPixelSize();
auto pixelFormat = m_pRenderTarget->GetPixelFormat();

// Create an empty bitmap matching the render target's size and format.
ID2D1Bitmap* cacheBitmap;
DX::ThrowIfFailed(m_pRenderTarget->CreateBitmap(
    targetSize,
    D2D1::BitmapProperties(pixelFormat),
    &cacheBitmap
));

// Copy the render target's current contents into the bitmap.
auto destPoint = D2D1::Point2U(0, 0);
auto srcRect = D2D1::RectU(0, 0, targetSize.width, targetSize.height);
DX::ThrowIfFailed(
    cacheBitmap->CopyFromRenderTarget(&destPoint, m_pRenderTarget, &srcRect));

// Clear the target and draw the cached bitmap back into a sub-rectangle.
m_pRenderTarget->BeginDraw();
m_pRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
m_pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
auto destRect = D2D1::RectF(25.f, 25.f, 250.f, 250.f);
auto srcRectF = D2D1::RectF(
    0.f,
    0.f,
    static_cast<float>(targetSize.width),
    static_cast<float>(targetSize.height)
);
m_pRenderTarget->DrawBitmap(
    cacheBitmap,
    &destRect,
    1.f,
    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR,
    &srcRectF
);
DX::ThrowIfFailed(m_pRenderTarget->EndDraw());
```

When I find time, my next step would be to try to translate this to Rust, but if someone else is up for it I'd be happy to accept the help :D
@raphlinus Would you be able to spare some time to look over the solution and roughly gauge whether it makes any sense? I'm very new to D2D / Windows APIs, so it's really hard for me to judge 😅
This looks reasonable to me. Making this work in Rust also shouldn't be too tricky; the API calls all have the same names, and there are lots of other examples of using these APIs in the direct2d backend. :)
@x3ro yeah, these sorts of differences are generally expected.
@derekdreery do you want to give a Linux implementation a try? I think the signature should be final (@cmyr @raphlinus?), though I don't really like the name.
I guess it's worth starting from what the method does, which is to take the current state of some region of the render context and write it to an image. So maybe...
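The shape of that operation can be sketched with toy types. This is an illustrative model only: `ToyContext`, `Rect`, and `Image` here are placeholder stand-ins, not piet's real types, and the method name mirrors the description above ("take a region, write it to an image") rather than any settled API.

```rust
// Illustrative sketch: copy a region of the context's current pixel
// state out into an image. All types here are toy placeholders.
#[derive(Clone, Copy)]
struct Rect { x0: usize, y0: usize, x1: usize, y1: usize }

struct Image { width: usize, height: usize, pixels: Vec<u8> }

struct ToyContext {
    width: usize,
    pixels: Vec<u8>, // one byte per pixel, row-major, for simplicity
}

impl ToyContext {
    // The operation under discussion: snapshot `region` of the current state.
    fn capture_image_area(&self, region: Rect) -> Image {
        let (w, h) = (region.x1 - region.x0, region.y1 - region.y0);
        let mut pixels = Vec::with_capacity(w * h);
        for y in region.y0..region.y1 {
            let row = y * self.width;
            // Copy one scanline of the requested region.
            pixels.extend_from_slice(&self.pixels[row + region.x0..row + region.x1]);
        }
        Image { width: w, height: h, pixels }
    }
}
```

A real backend would of course delegate to the platform (e.g. a bitmap copy as in the Direct2D snippet earlier) rather than walk pixels on the CPU; the sketch only pins down the input/output shape being named.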
And as for Linux, I would just add an implementation that, for now, returns …
I think …
Sounds good @…
I think the fact that this may be cheap is sort of a platform implementation detail; it would not be unreasonable for a platform to be storing a list of drawing operations and only applying them lazily, for instance. I don't think we can really make any strong performance guarantees here at the API level. I don't necessarily think … Maybe...
Mmh, from the ones you proposed I do like …
@x3ro sounds good!
Unmarked this so that the pipelines would run. Still have to do the cleanup for d2d, but I don't have access to a Windows machine right now.
@x3ro I'm going to do a piet release soon, and I'd love to include this. Is there anything I can do to help get it over the line? Would you like me to do the Windows implementation?
Hey. My last weeks have been a bit turbulent, but I'd definitely have time to get this done on Friday. Would that work?
@x3ro realistically yes, Friday sounds like a reasonable target. :)
Almost done, will try to push an update today
I've pushed the changes that I'd made, but then I realized it appears to be more broken than I'd expected:
There are also some errors in the d2d target that I'm not seeing locally but in CI. Long story short, I probably won't be able to finish this in the coming work week. If you have the time, @cmyr, you can pick it up if you like. Otherwise I'll do my best to get it into a mergeable state as soon as I can.
The ubuntu snapshots aren't your fault; the other stuff I'll take a look at. Thanks!
            inner: ComPtr::from_raw(foo),
        }
    }
}
@x3ro do you have any context for these two methods? I don't see them used anywhere, and I'm not sure what they're trying to do.
Oh lord. If they're not used anymore they can definitely be removed. I had some problems with the Windows APIs reacting differently to different kinds of "null-ish" pointers, so this might've been me experimenting and then forgetting to remove them 😱
Cool, I figured it was something like this; at least nothing broke when I removed them. 😅
This functionality was added in linebender/piet#387 but it never worked correctly. This will hopefully be fixed by linebender/piet#513.
Hey folks 🙂 The app I'm trying to write with Druid is a simple replica of Grand Perspective. This requires me to render lots and lots of rectangles, hundreds of thousands if people are scanning their entire drives.
Based on this discussion in Zulip, I went ahead and threw together a proof of concept for a function in RenderContext that lets me cache the currently rendered version and then just write it back every frame as if it were an image. A usage example:
Please let me know what you think. I've verified that this works in the very small scenario that I'm testing on macOS, but that's the extent of my testing so far.
PS: I've also pushed the code where I'm using this, in case that helps make things clearer 😅
Related to linebender/druid#1499