allow extensions to StandardMaterial #7820
Conversation
It looks like your PR is a breaking change, but you didn't provide a migration guide. Could you add some context on what users should update when this change gets released in a new version of Bevy?
Not sure if the PR description isn't using the same formatting the CI check expects, but please ignore that comment.
Does this provide an easy way to do something like "use shader code to generate a random color, feed that into the base color of the pbr shader"? Or can we only modify the output, not the input?
you can modify the vertex output / pbr_fragment input, so for that particular case i guess you could use vertex colors and override the generated vertex color. you can't more generally change the material bindings since the pbr_fragment function reads them directly. for that you would have to go one level deeper and construct a PbrInput yourself. I could introduce another function so that there's an entry point which takes a PbrInput and calls pbr plus does all the in-shader postprocessing as well. i think that sounds like a good idea.
Yeah, exactly. I'm thinking of the use case where you want to make procedurally generated materials, but have them work with PBR lighting with proper material information. Like how blender lets you build a shader graph with inputs for the PBR parameters. |
What would be some use cases for modifying the outputs there as opposed to the inputs? I can see use cases for hooking in before the vertex shader (e.g. heightmapping) and before the fragment shader (e.g. animated texture), but I can't immediately think of anything I would want to do that would come at those stages specifically.
vertex output is fragment input, so this is the right place to do animated textures if you want smooth non-linear adjustments between vertexes. if you meant the fragment output, in the example i modified the output to do some kind of rough cell-shading type thing. i imagine some other mesh-specific (not full screen) post-processing could also fit here. |
This looks fantastic! 2 👍's up |
This PR looks great. While reading the code, I realised that the need for extending shaders/materials is not exclusively limited to PBR materials. Theoretically it would also make sense to extend a 2d material.
that's theoretically true, but in practice it's easy enough to make your own material for 2d. The built in 2d frag shader is only 10 lines long so you can copy/paste it without much drama. |
Yeah, this could (eventually) enable all sorts of interesting effects, such as decals and procedural water surfaces with parallax mapping. Maybe we could also have an "inversion of control" pattern, where instead of the extended shader owning the "true" entrypoint, there are multiple "hook"/"override" points that are called back by the main PBR shader in specific places.
Isn't this going to behave differently depending on whether tone mapping happens in the PBR shader (non-HDR camera) or in a separate pass later (HDR camera)? |
yes it will. i'm not sure that's a huge issue since i would expect the user to use hdr or not exclusively (not mix them and expect the same results). and if necessary the shader defs could be used in the top-level frag shader as well to alter the behaviour. but i do agree that it would be nice to inject user functionality before the "post-processing" section of the main shader sequence (fog, debanding, etc as well as tonemapping). that, plus jasmine's suggestion about altering inputs, suggests that maybe we should have a handful of core pbr functions (one to generate a PbrInput, one for lighting, one for the in-shader postprocessing) that can be called separately.
i'll try and bring this up to date and try that approach as well in the next week or so.
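As a rough sketch, those split entry points might look like this in a user's fragment shader. All function names here are illustrative assumptions based on this comment thread (a PbrInput constructor, a lighting step, and a post-processing step), not a finalized API; `my_procedural_color` and `my_cel_shading` stand in for hypothetical user functions.

```wgsl
fn fragment(in: FragmentInput) -> @location(0) vec4<f32> {
    // 1. build a PbrInput from the standard material bindings
    var pbr_input = pbr_input_from_standard_material(in);

    // 2. user hook: alter inputs before lighting (e.g. procedural materials)
    pbr_input.material.base_color = my_procedural_color(in);

    // 3. apply lighting
    var color = apply_pbr_lighting(pbr_input);

    // 4. user hook: alter outputs before the post-processing section
    color = my_cel_shading(color);

    // 5. fog, debanding, tonemapping etc.
    return main_pass_post_processing(color);
}
```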
Moving to 0.12. Seems useful but needs more reviews (and I'd like to consider the design a bit). |
assets/shaders/quantize_shader.wgsl
fn fragment(in: FragmentInput) -> @location(0) vec4<f32> {
    // call to the standard pbr fragment shader
    var output_color = pbr_fragment(in);

    // we can then modify the results using the extended material data
    output_color = vec4<f32>(vec4<u32>(output_color * f32(my_extended_material.quantize_steps))) / f32(my_extended_material.quantize_steps);
    return output_color;
}
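The per-channel arithmetic in that shader can be sanity-checked outside WGSL. Here is a minimal Rust translation (the function name is mine, not part of the PR), assuming the same truncating float-to-integer conversion that the WGSL `vec4<u32>(...)` cast performs:

```rust
/// Mirrors the WGSL expression
/// `vec4<u32>(color * f32(steps)) / f32(steps)` for a single channel:
/// scale up, truncate to an integer step, then scale back down.
fn quantize_channel(channel: f32, quantize_steps: u32) -> f32 {
    let steps = quantize_steps as f32;
    // `as u32` truncates toward zero, like WGSL's f32 -> u32 conversion
    ((channel * steps) as u32) as f32 / steps
}

fn main() {
    // with 10 steps, 0.55 snaps down to the 5th step, i.e. 0.5
    println!("{}", quantize_channel(0.55, 10));
}
```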
Looking at this example, does this mean there is no way to, for example, set the color before the shadows are added on top?
kind of, see the comment above plus the next couple. i will amend it so you can generate a PbrInput struct with a first function call, modify it, then pass it on to lighting / postprocessing with separate functions.
    is_front,
);
// alpha discard
pbr_input.material.base_color = alpha_discard(pbr_input.material, pbr_input.material.base_color);
I don't really like this being done here. pbr_input_from_standard_material does a lot of texture reads. discard should be done as soon as possible, as soon as we have the information needed to make the decision. Is there a good way to improve this?
robtfm pointed out that this is not a regression. I am glad that when code moves around, I see it with fresh eyes. And, as this is not a regression, this is not a blocking comment.
as we discussed, it's a little involved to calculate the gradient and include the mip_bias correctly, so i'd prefer to leave it for future
the discard was previously in pbr_functions::pbr(), which was well after all the texture samples (which are in pbr::fragment())
Massive usability improvement, and I do think this is on the right path. I do have one idea to make extending feel a bit more natural and improve the ergonomics of using extended materials. I'm approving the PR (in its current state) because I see no reason to block progress on this discussion, but if we have time to discuss, I think it's worth doing that before merging, as the changes wouldn't be significant.
App::new()
    .add_plugins(DefaultPlugins)
    .add_plugins(MaterialPlugin::<
        ExtendedMaterial<StandardMaterial, MyExtension>,
Given that extending materials is likely going to be very common in practice, I think it's worth considering making it a "first class" aspect of the core Material trait. Using the extended_material.rs example:
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugins(MaterialPlugin::<QuantizedStandardMaterial>::default())
.run();
}
fn setup(
mut commands: Commands,
mut materials: ResMut<Assets<QuantizedStandardMaterial>>,
) {
// sphere
commands.spawn(MaterialMeshBundle {
material: materials.add(QuantizedStandardMaterial {
base: StandardMaterial {
base_color: Color::RED,
opaque_render_method: OpaqueRendererMethod::Auto,
..Default::default()
},
quantize_steps: 10,
}),
..default()
});
}
#[derive(Asset, AsBindGroup, TypePath, Debug, Clone)]
struct QuantizedStandardMaterial {
base: StandardMaterial,
#[uniform(100)]
quantize_steps: u32,
}
impl Material for QuantizedStandardMaterial {
type Base = StandardMaterial;
fn fragment_shader() -> ShaderRef {
"shaders/extended_material.wgsl".into()
}
}
impl GetBaseMaterial<StandardMaterial> for QuantizedStandardMaterial {
fn get_base_material(&self) -> &StandardMaterial {
&self.base
}
}
This improves a number of things ergonomically. It removes the need for the ExtendedMaterial wrapper when referencing the material type:
// This
Assets<QuantizedStandardMaterial>
// Versus this
Assets<ExtendedMaterial<StandardMaterial, QuantizedStandardMaterial>>
And also removes both the wrapper type and nesting when users define the material:
// This
QuantizedStandardMaterial {
base: StandardMaterial {
base_color: Color::RED,
opaque_render_method: OpaqueRendererMethod::Auto,
..Default::default()
},
quantize_steps: 3,
}
// Versus this
ExtendedMaterial {
base: StandardMaterial {
base_color: Color::RED,
opaque_render_method: OpaqueRendererMethod::Auto,
..Default::default()
},
extension: QuantizedStandardMaterial {
quantize_steps: 3
},
}
I've verified that this is expressible in the type system. This does have the downside of requiring this for "non-extended" materials:
impl Material for MyMaterial {
type Base = ();
fn fragment_shader() -> ShaderRef {
"foo.wgsl".into()
}
}
This UX hiccup would be resolved by "associated type defaults" (rust-lang/rust#29661). Although progress on this feature appears to have stalled.
We could also move the associated type to a normal generic:
trait Material<Base = ()> { }
Which enables this:
impl Material for MyMaterial {
fn fragment_shader() -> ShaderRef {
"foo.wgsl".into()
}
}
impl Material<StandardMaterial> for QuantizedStandardMaterial {
fn fragment_shader() -> ShaderRef {
"shaders/extended_material.wgsl".into()
}
}
But for Rust reasons, using a normal generic would mean that you need to re-specify the base material whenever you create a type that needs to support base materials (by explicitly naming Material<T>):
.add_plugins(MaterialPlugin::<QuantizedStandardMaterial, StandardMaterial>::default())
Note that GetBaseMaterial is a separate trait (rather than existing on Material) in the interest of making impl Material for MyMaterial not need to supply a get_base_material(&self) -> &() method. With a separate trait, we can use a blanket impl on T to auto-impl this.
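A minimal sketch of that blanket-impl idea (the trait shape follows the snippet above from this thread; none of this is shipped Bevy API):

```rust
/// Hypothetical trait from the discussion: fetch the base material
/// of an extended material.
trait GetBaseMaterial<B> {
    fn get_base_material(&self) -> &B;
}

/// Blanket impl: every type trivially has `()` as a base material, so
/// plain (non-extended) materials never write `get_base_material` by hand.
impl<T> GetBaseMaterial<()> for T {
    fn get_base_material(&self) -> &() {
        &()
    }
}

struct StandardMaterial {
    depth_bias: f32,
}

struct QuantizedStandardMaterial {
    base: StandardMaterial,
    quantize_steps: u32,
}

/// Extended materials point at their real base instead.
impl GetBaseMaterial<StandardMaterial> for QuantizedStandardMaterial {
    fn get_base_material(&self) -> &StandardMaterial {
        &self.base
    }
}

fn main() {
    let q = QuantizedStandardMaterial {
        base: StandardMaterial { depth_bias: 0.25 },
        quantize_steps: 3,
    };
    let _ = q.quantize_steps;
    // fully qualified call disambiguates from the blanket `()` impl
    let base = <QuantizedStandardMaterial as GetBaseMaterial<StandardMaterial>>::get_base_material(&q);
    println!("base depth_bias = {}", base.depth_bias);
}
```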
I like the suggestions. I think whether we merge or not then depends on what time @robtfm has available to implement and test it.
one issue that i see with this approach - the extension is not a Material but a MaterialExtension, and ExtendedMaterial works by applying Material + MaterialExtension => Material. The base material defines the alpha mode, opaque render method and depth bias, while the extension only defines the shaders (plus data).
So we would need to somehow say that E: Material (rather than E: MaterialExtension) is required if Base = (), or change it to just use a single material trait and introduce ambiguity on the depth_bias (easily solved by adding), alpha_mode and default opaque method (not easily solved, but could be done by convention).
i'll think a bit more about this later, there might be a simple way to make it fit together that i'm missing.
One solution would be to make every "material value" return an Option (and default to returning None). Slightly more boilerplate when defining them, but these properties are reasonably uncommon to set manually.
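A sketch of that Option-returning idea (trait and method names are illustrative, not Bevy API): the extension's properties default to None, and a resolver falls back to the base material's value.

```rust
/// Hypothetical "material values" trait: every property is an Option,
/// and the default impl returns None, meaning "defer to the base".
trait MaterialValues {
    fn depth_bias(&self) -> Option<f32> {
        None
    }
}

struct Base;
impl MaterialValues for Base {
    fn depth_bias(&self) -> Option<f32> {
        Some(0.5)
    }
}

/// The extension sets nothing, so it inherits everything from the base.
struct Extension;
impl MaterialValues for Extension {}

/// Resolve a property: extension wins if set, else base, else a default.
fn resolve_depth_bias(ext: &dyn MaterialValues, base: &dyn MaterialValues) -> f32 {
    ext.depth_bias().or(base.depth_bias()).unwrap_or(0.0)
}

fn main() {
    // the extension returns None, so the base's Some(0.5) wins
    println!("{}", resolve_depth_bias(&Extension, &Base));
}
```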
I had the same thought about returning options. It’s not terrible.
Regarding chaining - using associated types will make the chain parent type fixed for a given extension. I guess it’s not a huge problem since the shader needs to have precise awareness of the parent anyway - at least right now, I was hoping to improve this in future though.
The long type can be somewhat improved by using a typedef.
But I guess overall it does seem better to use associated type. I’ll have a go
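The typedef workaround mentioned above is just a type alias over the wrapper. A small sketch (the stand-in types mirror the example in this thread, not real Bevy imports):

```rust
// Stand-in types, just to show the shape of the alias.
struct StandardMaterial;
struct QuantizeExt {
    quantize_steps: u32,
}
struct ExtendedMaterial<B, E> {
    base: B,
    extension: E,
}

// Name the long wrapper type once...
type QuantizedStandardMaterial = ExtendedMaterial<StandardMaterial, QuantizeExt>;

// ...so use sites stay short, e.g. `Assets<QuantizedStandardMaterial>`.
fn make_material() -> QuantizedStandardMaterial {
    ExtendedMaterial {
        base: StandardMaterial,
        extension: QuantizeExt { quantize_steps: 3 },
    }
}

fn main() {
    let m = make_material();
    let _ = &m.base;
    println!("steps = {}", m.extension.quantize_steps);
}
```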
Arg, so sadly, now that I've gotten into the details of this impl, I've realized that I missed an important piece: this is a statically recursive type. For the Material trait (with a Base associated type) alone, this is expressible. And I believe we could even make it work by checking if the TypeId is () and breaking the "material property resolve" recursion when we hit it (ugly, but I do think it would work).
However this will not work (and is not expressible) for the specialized pipeline key (because actual types cannot recurse like this).
Sadly that means that (with the current specialization system), I don't think we can do a unified trait.
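The statically recursive shape described here can be sketched as follows (names are illustrative, not the shipped Bevy trait; the depth method stands in for the "material property resolve" recursion that bottoms out at `()`):

```rust
/// Illustrative Material trait with a Base associated type.
/// The chain terminates at `()`, which serves as the "no base" marker.
trait Material {
    type Base: Material;

    /// Length of the extension chain; recursion stops at `()`.
    fn chain_depth() -> u32 {
        1 + Self::Base::chain_depth()
    }
}

/// `()` breaks the recursion: it is its own base, with depth zero.
impl Material for () {
    type Base = ();
    fn chain_depth() -> u32 {
        0
    }
}

struct StandardMaterial;
impl Material for StandardMaterial {
    type Base = ();
}

struct QuantizedStandardMaterial;
impl Material for QuantizedStandardMaterial {
    type Base = StandardMaterial;
}

fn main() {
    // the extended material sits two levels above the `()` terminator
    println!("{}", <QuantizedStandardMaterial as Material>::chain_depth());
}
```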
Too many layers would probably also hurt specialisation performance as I guess the keys would grow and grow.
I'm going to explore one more plan that defines an ExtendedMaterial trait, but implements Material for the extended type.
Ok that plan hits a similar issue where we need the extending material type to implement AsBindGroup, but we need that AsBindGroup impl to return the combined bind group. I think that might be possible by adding "nesting" to the AsBindGroup derive. But we're entering "scope too big to make the change now" territory. I think we should roll with the impl in this PR for now.
# Objective
- After #7820 example `array_texture` doesn't display anything
## Solution
- Use the new name of the function in the shader
# Objective
allow extending `Material`s (including the built in `StandardMaterial`) with custom vertex/fragment shaders and additional data, to easily get pbr lighting with custom modifications, or otherwise extend a base material.
# Solution
- added `ExtendedMaterial<B: Material, E: MaterialExtension>` which contains a base material and a user-defined extension.
- added example `extended_material` showing how to use it
- modified AsBindGroup to have "unprepared" functions that return raw resources / layout entries so that the extended material can combine them

note: doesn't currently work with array resources, as i can't figure out how to make the OwnedBindingResource::get_binding() work, as wgpu requires a `&'a [&'a TextureView]` and i have a `Vec<TextureView>`.
# Migration Guide
manual implementations of `AsBindGroup` will need to be adjusted, the changes are pretty straightforward and can be seen in the diff for e.g. the `texture_binding_array` example.

Co-authored-by: Robert Swain <[email protected]>