Consider adding option of downscaling before blur #413
Hey there, this is probably one of my favourite Compose libraries 🏅

There is only one minus I see with it, and that is performance. While it isn't inherently bad, it doesn't leave much room for using other GPU-bound features such as shaders.

In my application I implemented a downscaling modifier for components: I place the layout at a certain scale factor, say `0.5f` for quarter resolution, and then expand it with `graphicsLayer` by `1 / 0.5f`. Visually it remains full size.
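A minimal sketch of that idea in plain Compose (hedged: `downscaledBlur` is an illustrative name rather than this library's API, and it assumes bounded incoming constraints):

```kotlin
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.blur
import androidx.compose.ui.graphics.TransformOrigin
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.layout.layout
import androidx.compose.ui.unit.Dp
import kotlin.math.roundToInt

// Illustrative sketch, not part of the library: measure the content small,
// scale it back up at draw time, and blur at a proportionally smaller radius.
fun Modifier.downscaledBlur(blurRadius: Dp, scaleFactor: Float = 0.5f): Modifier =
    this
        // Scale the reduced-size content back up to full size at draw time.
        .graphicsLayer {
            scaleX = 1f / scaleFactor
            scaleY = 1f / scaleFactor
            // Pin the pivot to the top left so the scaled-up content lines
            // up exactly with the full-size bounds reported below.
            transformOrigin = TransformOrigin(0f, 0f)
        }
        // The blur sees the smaller content, so the radius shrinks with it.
        .blur(blurRadius * scaleFactor)
        .layout { measurable, constraints ->
            // Measure the content at the reduced resolution...
            val placeable = measurable.measure(
                constraints.copy(
                    minWidth = 0,
                    minHeight = 0,
                    maxWidth = (constraints.maxWidth * scaleFactor).roundToInt(),
                    maxHeight = (constraints.maxHeight * scaleFactor).roundToInt(),
                )
            )
            // ...but report the full size so surrounding layout is unaffected.
            layout(
                (placeable.width / scaleFactor).roundToInt(),
                (placeable.height / scaleFactor).roundToInt(),
            ) {
                placeable.place(0, 0)
            }
        }
```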
One of the big benefits this enables is calculating the blur at a much lower radius, essentially `blurRadius * scaleFactor`. I've tried it for this use case, and the best part is that it doesn't seem to introduce any visible artifacting while significantly lowering GPU overhead. Downscaling wouldn't need to be default behaviour, but it could be a nice parameter to have. I could also see it working the opposite way, where the amount of blur you choose dictates how much downscaling is applied. What do you think? :)
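That opposite direction could be a small mapping from the requested radius to a scale factor; the `4.dp` threshold and `0.25f` floor below are made-up placeholders, not tuned values:

```kotlin
import androidx.compose.ui.unit.Dp
import androidx.compose.ui.unit.dp

// Pick the downscale from the blur radius instead of exposing it directly.
fun scaleFactorFor(blurRadius: Dp): Float {
    // Small blurs are cheap; render at full resolution.
    if (blurRadius <= 4.dp) return 1f
    // Larger blurs tolerate more downscaling before artifacts show, but a
    // floor keeps detailed content from visibly pixelating.
    return (4.dp / blurRadius).coerceIn(0.25f, 1f)
}
```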
Comments

Have you measured this? Skia (both on Android and in CMP) will automatically downsample in its blur implementation for high blur radius values, so I'm wondering whether a manual scale is actually doing anything. Also, a graphics layer isn't like a bitmap: everything in a graphics layer is deferred until draw time, so applying a scale down and then a scale up is likely being no-op'd. We do copy content across graphics layers, though, so there is a chance that the scale does have an effect. Easy to test either way.

Oh, it is definitely not being no-op'd. With a low `scaleFactor` and detailed shaders I can reach pixelation, although the interpolation that happens hides it pretty well. I was testing with a fullscreen shader that had blur applied. Not very scientific, but without the downscale I saw 80-90% GPU usage; with the downscale I got it down to 30-40%. It would be better if I did some profiling. I could try checking later how the blur behaves by itself, since I wasn't familiar with Skia doing that automatically. It is very possible there wouldn't be any improvement purely for blur, but that is hard to believe: in that case the performance drop for large blurs would be negligible, which is not really what I see 🤔
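For reference, the comparison was roughly of this shape (a sketch only: `downscaledBlur` is the illustrative modifier from the issue text above, and `DetailedShaderContent` is a hypothetical stand-in, not code from the actual app):

```kotlin
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.blur
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

// Stand-in for the expensive fullscreen shader content described above.
@Composable
fun DetailedShaderContent() {
    Box(Modifier.fillMaxSize().background(Color.Magenta))
}

// Flip `useDownscale` and compare GPU usage in a profiler.
@Composable
fun BlurComparison(useDownscale: Boolean) {
    val modifier = if (useDownscale) {
        Modifier.fillMaxSize().downscaledBlur(blurRadius = 32.dp, scaleFactor = 0.5f)
    } else {
        Modifier.fillMaxSize().blur(32.dp)
    }
    Box(modifier) { DetailedShaderContent() }
}
```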
I've got something kind of working in #416, so I'll get some benchmarks run tomorrow.

Just circling back. I've just run the usual benchmarks with a

I really appreciate you taking the chance to experiment with the idea 🙇🚀