[Merged by Bors] - Upgrade to wgpu 0.11 #2933
Conversation
@@ -31,7 +31,8 @@ pub struct WgpuRenderResourceContext {
 }

 pub const COPY_BYTES_PER_ROW_ALIGNMENT: usize = wgpu::COPY_BYTES_PER_ROW_ALIGNMENT as usize;
-pub const BIND_BUFFER_ALIGNMENT: usize = wgpu::BIND_BUFFER_ALIGNMENT as usize;
+// TODO: fix this?
+pub const BIND_BUFFER_ALIGNMENT: usize = 256;
This is probably the most contentious change here. Wgpu now requires access to the Device to query alignment. This is easy enough to deal with in the new renderer, but piping this through is work I don't feel particularly inclined to do for the old renderer. 256 is the "common" alignment I've seen. I'm inclined to just hard-code it here, knowing that we will be removing all of this code before the next release.
Sounds reasonable. 256 is also the alignment that WebGPU conservatively requires implementations to support.
Yeah, given that it's being cut anyway, don't waste time trying to find a better solution.
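For reference, querying the alignment from the device in wgpu 0.11 would look roughly like the sketch below. `bind_buffer_alignment` is a hypothetical helper, and it assumes the alignment now lives in `wgpu::Limits` (queried via `Device::limits()`) after the 0.11 limits rework; piping a `&Device` through to where the constant is used is exactly the work the comment above is avoiding for the old renderer.

```rust
// Hypothetical helper (not part of this PR): query the alignment from the
// device limits instead of hard-coding 256. Assumes wgpu 0.11, where the
// old `wgpu::BIND_BUFFER_ALIGNMENT` constant was replaced by per-device
// limits.
fn bind_buffer_alignment(device: &wgpu::Device) -> usize {
    device.limits().min_uniform_buffer_offset_alignment as usize
}
```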
Just left a few comments to clarify some things.
for window in windows.values_mut() {
    if let Some(texture_view) = window.swap_chain_texture.take() {
        if let Some(surface_texture) = texture_view.take_surface_texture() {
            surface_texture.present();
Are we sure this is the "correct" way of presenting to multiple windows (just doing each one sequentially in a loop), in terms of avoiding unexpected lag / perf issues? I think I recall some conversation about present calls blocking on some drivers (nvidia?) and I'm not sure if that's relevant here.
I don't really know; this code may very well be fine as it is. I just wanted to point out something that might be worth double-checking.
I think it was @cwfitzgerald who was saying that moving present / acquire to a separate thread is a good idea.
I think I'd like to cover that in a separate PR as it will require some experimentation / design discussions. I'd love to discuss this though!
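One cheap way to check the blocking concern in the meantime would be to time each `present()` call. A rough diagnostic sketch mirroring the loop above (the `trace!` macro and the surrounding window types are assumed from the existing code; this is not part of the PR):

```rust
use std::time::Instant;

// Diagnostic sketch: time each present() in the sequential multi-window
// loop to see whether any backend/driver blocks (e.g. waiting on vsync).
for window in windows.values_mut() {
    if let Some(texture_view) = window.swap_chain_texture.take() {
        if let Some(surface_texture) = texture_view.take_surface_texture() {
            let start = Instant::now();
            surface_texture.present();
            // If present blocks, every window after the first will show a
            // large elapsed time here.
            trace!("present took {:?}", start.elapsed());
        }
    }
}
```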
-min_binding_size: bind_type.get_uniform_size().and_then(wgpu::BufferSize::new),
+// FIXME: The line below cause a validation error
+// min_binding_size: bind_type.get_uniform_size().and_then(wgpu::BufferSize::new),
+min_binding_size: None,
Is this a wgpu bug, or a validation layer bug?
(not that it matters, this code is in the old renderer)
We (wgpu) are missing validation for minimum binding size on draw, so if you specify None, we'll just let you under-specify the amount of data.
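For context, a fully specified uniform-buffer layout entry in wgpu 0.11 looks roughly like the sketch below. `MyUniform` is a placeholder type, and the size passed to `min_binding_size` has to be at least what the shader declares for the binding, which is presumably where the validation error mentioned above came from.

```rust
#[repr(C)]
struct MyUniform {
    transform: [[f32; 4]; 4],
}

// Sketch of a uniform-buffer layout entry with an explicit min_binding_size
// (wgpu 0.11 API). `MyUniform` above is a placeholder type.
fn uniform_layout_entry() -> wgpu::BindGroupLayoutEntry {
    wgpu::BindGroupLayoutEntry {
        binding: 0,
        visibility: wgpu::ShaderStages::VERTEX_FRAGMENT,
        ty: wgpu::BindingType::Buffer {
            ty: wgpu::BufferBindingType::Uniform,
            has_dynamic_offset: false,
            // Passing `None` here skips the up-front size check; per the
            // comment above, wgpu does not yet re-validate the size at draw
            // time, so an under-sized buffer can slip through.
            min_binding_size: wgpu::BufferSize::new(std::mem::size_of::<MyUniform>() as u64),
        },
        count: None,
    }
}
```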
bors r+
Upgrades both the old and new renderer to wgpu 0.11 (and naga 0.7). This builds on @zicklag's work here #2556. Co-authored-by: Carter Anderson <[email protected]>
Build failed.
Looks like the alien_cake_addict example (using the old renderer) is failing to run on CI due to hitting some limits.
Let's see if using the "adapter limits" helps with that. Not a default I want to carry over into the new renderer, but I want to get this merged.
Well, in released games we probably want to default to adapter limits. But while developing, we generally want minimums, not maximums.
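The trade-off being discussed looks roughly like the sketch below in wgpu 0.11 terms. The debug/release split via `cfg!(debug_assertions)` is purely illustrative and not what the PR does; `request_device` here is a stand-alone helper, not Bevy code.

```rust
// Illustrative sketch: conservative default limits while developing (so code
// that would break on weaker hardware fails early), adapter limits for a
// released build (use everything the local GPU supports).
async fn request_device(
    adapter: &wgpu::Adapter,
) -> Result<(wgpu::Device, wgpu::Queue), wgpu::RequestDeviceError> {
    let limits = if cfg!(debug_assertions) {
        wgpu::Limits::default() // portable baseline limits
    } else {
        adapter.limits() // everything the adapter reports
    };

    adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits,
            },
            None, // no API trace
        )
        .await
}
```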
bors r+
Upgrades both the old and new renderer to wgpu 0.11 (and naga 0.7). This builds on @zicklag's work here #2556. Co-authored-by: Carter Anderson <[email protected]>
Pull request successfully merged into pipelined-rendering. Build succeeded.
The fix was originally introduced in [#2858](https://github.com/bevyengine/bevy/pull/2858/files#diff-0f34eeda7ac2fe1f9e9b27de92d9290e0b360ffa6f032770aff22b5fef4eaa63R137-R143) but was lost in the upgrade to wgpu 0.11 in #2933.