
Nodes: Add VelocityNode and MotionBlurNode. #29058

Merged 25 commits into mrdoob:dev on Aug 9, 2024

Conversation

@Mugen87 (Collaborator) commented Aug 2, 2024

Related issue: -

Description

This PR ports VelocityShader to WebGPURenderer. The logic for producing velocity vectors is moved into a single new module, VelocityNode.

Being able to access motion vectors is important for effects like motion blur or TRAA. Original PR: #23784.

MotionBlurNode is based on #14510.
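
For context, the velocity produced by VelocityNode boils down to the delta between the current and previous NDC positions. The core expression looks roughly like this (simplified sketch, not the full implementation):

// velocity as the NDC-space delta between frames, scaled by 0.5
let velocity = sub( ndcPositionCurrent, ndcPositionPrevious ).mul( 0.5 );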

github-actions bot commented Aug 2, 2024

📦 Bundle size

Full ESM build, minified and gzipped.

Filesize dev: 685.1 kB (169.6 kB)
Filesize PR: 685.1 kB (169.6 kB)
Diff: +0 B

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

Filesize dev: 462 kB (111.5 kB)
Filesize PR: 462 kB (111.5 kB)
Diff: +0 B

@sunag sunag added this to the r168 milestone Aug 2, 2024
@sunag (Collaborator) commented Aug 3, 2024

For some reason, changing camera.position is not generating any velocity.

@WestLangley (Collaborator) commented:

FWIW, I am getting something reasonable by making these two changes:

webgpu_mrt.html: comment out the background texture. Otherwise, the model always shows zero velocity.

scene.background = texture;
//scene.background = texture;

VelocityNode.js: amplify the velocity

//let velocity = sub( ndcPositionCurrent, ndcPositionPrevious ).mul( 0.5 );
let velocity = sub( ndcPositionCurrent, ndcPositionPrevious ).mul( 100 );

@Mugen87 (Collaborator, Author) commented Aug 3, 2024

comment out the background texture. Otherwise, the model always shows zero velocity.

This was also responsible for no velocity when panning the camera.

I have updated the example so that it retains the background but uses selective velocity: the default value for the scene, and thus the background, is "no velocity", while objects that should receive motion blur, like the helmet, write motion vectors.

This setup is required for per-object motion blur anyway. When adding an example for per-object motion blur, I'll revert the changes in webgpu_mrt to keep it as simple as possible.

@Mugen87 Mugen87 marked this pull request as draft August 3, 2024 10:39
@Mugen87 (Collaborator, Author) commented Aug 3, 2024

I've added MotionBlurNode and webgpu_postprocessing_motion_blur so we have a separate example for testing.

One open issue is motion blur with animated models. In-place animations do not change the world position of the model, so without further enhancements it would not be possible to detect velocity changes for a SkinnedMesh. This can be done as in #14510, by saving the bones of the previous frame and using them in the shader to compute the diff (see the sketch below). In essence, we need an additional positionLocal node which represents the state of the previous frame.
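
A rough sketch of what the CPU side of that could look like (a hypothetical helper, not code from this PR; it only caches data after rendering so a previous-frame uniform could be fed from it):

import * as THREE from 'three';

const previousState = new WeakMap();

// call once per frame, after renderer.render( scene, camera )
function storePreviousFrameState( scene ) {

	scene.traverse( ( object ) => {

		if ( object.isMesh !== true ) return;

		let state = previousState.get( object );

		if ( state === undefined ) {

			state = { worldMatrix: new THREE.Matrix4(), boneMatrices: null };
			previousState.set( object, state );

		}

		// previous world matrix for the regular case
		state.worldMatrix.copy( object.matrixWorld );

		// previous bone matrices for the skinned case
		if ( object.isSkinnedMesh === true ) {

			if ( state.boneMatrices === null ) state.boneMatrices = new Float32Array( object.skeleton.boneMatrices.length );
			state.boneMatrices.set( object.skeleton.boneMatrices );

		}

	} );

}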

@Mugen87 Mugen87 changed the title Nodes: Add VelocityNode. Nodes: Add VelocityNode and MotionBlurNode. Aug 3, 2024
@sunag (Collaborator) commented Aug 5, 2024

It seems that if we have the clipPositionCurrent in an MRT texture, we can compare it with the previous frame's texture using ping/pong. That seems to solve this and be less costly for the GPU. With the previous approach of cloning the matrices, we would have the problem of duplicating all the other types of vertex transformations.

@Mugen87 (Collaborator, Author) commented Aug 5, 2024

That sounds good!

It seems that if we have the clipPositionCurrent in a MRT texture, we can compare it with a previous texture using ping/pong.

If we have a separate export for clipPositionCurrent, it could be used to configure the MRT node. But the ping/pong part is something I don't understand yet. What would managing the two textures look like?

@sunag (Collaborator) commented Aug 5, 2024

I think it would be interesting to have support for alternating textures through an API. I'll try to create something along those lines.
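
The general idea is the classic ping/pong pattern, roughly like this (a generic sketch with placeholder names and render-target class, not the final API):

import * as THREE from 'three';

const width = window.innerWidth, height = window.innerHeight;

let currentTarget = new THREE.RenderTarget( width, height );
let previousTarget = new THREE.RenderTarget( width, height );

// after rendering into currentTarget, swap so it becomes next frame's "previous" texture
function swapTargets() {

	const tmp = previousTarget;
	previousTarget = currentTarget;
	currentTarget = tmp;

}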

@sunag (Collaborator) commented Aug 6, 2024

It works :), I will test it with skinning now.

@Mugen87 (Collaborator, Author) commented Aug 6, 2024

It's impressive how the amount of code was reduced with this new approach 😳.

@sunag sunag marked this pull request as ready for review August 6, 2024 15:52
@Mugen87 (Collaborator, Author) commented Aug 7, 2024

It seems the motion blur effect does not work with animated models yet.

I'm also not sure if the current setup is appropriate for per-object motion blur. The entire scene is affected, but sometimes you want only specific objects to have motion blur, or to apply it with different intensities.

@Mugen87 Mugen87 marked this pull request as draft August 7, 2024 08:46
@Mugen87 (Collaborator, Author) commented Aug 7, 2024

TBH, I found the previous setup a bit more natural: you configure a velocity output via MRT and then assign the color and velocity nodes to the motion blur pass.

It would be nice if we could somehow hide the "previous data" step inside MotionBlurNode and not let it leak into the application.

@sunag (Collaborator) commented Aug 7, 2024

I had noticed some problems, so I won't merge this yet. It seems the problem now is that we are comparing NDC positions, but it is the difference between the matrices that makes the NDC position calculation correct. Perhaps storing the matrix values in an MRT texture would solve it.
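
In other words, as pseudocode (a rough sketch, not actual node code), the previous position needs to go through the previous frame's matrices rather than being read back as a stored NDC value:

// clipCurrent  = projection * viewCurrent  * modelCurrent  * position
// clipPrevious = projection * viewPrevious * modelPrevious * position
// ndc = clip.xy / clip.w
// velocity = ( ndcCurrent - ndcPrevious ) * 0.5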

@Mugen87 (Collaborator, Author) commented Aug 7, 2024

If we just store the matrices, it seems we need a special solution for skinned meshes again, correct?

In this case, I would suggest reverting to the original version, which provided the previous world matrix as a uniform. If skinning is used, the previous bone matrices can be shared as a uniform as well. I was not able to finish that bit, but I guess it's worth giving it a shot.

@sunag (Collaborator) commented Aug 7, 2024

If we just store the matrices, it seems we need a special solution for skinned meshes again, correct?

The idea is to provide an automated solution for this. With the previous approach, skinning would be just one of many implementations that would have to be done, alongside morph targets, instancing, batching and custom positionNode. If MRT works, it would still be the best path in terms of performance and maintenance; if it doesn't, I'll revert it. I may need a few more days to test these ideas 😇

@Mugen87 (Collaborator, Author) commented Aug 7, 2024

No need to rush^^! We should take our time with this topic and explore the different options.

@sunag (Collaborator) commented Aug 8, 2024

I updated the example

[screenshot]

@sunag (Collaborator) commented Aug 8, 2024

@Mugen87 From what I understand, the comparison is made using the MVP matrix and then converted to NDC by dividing by w. The approach I took stores positionLocal in an MRT texture and then uses it in the next frame, calculating the MVP of each object and thus obtaining a per-object velocity. I did not deform the geometry the way most per-object motion blur implementations do, so the effect is not very evident on the character; the motion vectors can be seen just by adjusting the GUI. This seems to me to be the best approach so far, although not yet definitive. I think the skinned motion vectors can be improved in the internal regions as well.

const scenePass = pass( scene, camera );

const positionPrevious = scenePass.getPreviousTextureNode( 'position' );

scenePass.setMRT( mrt( {
	output,
	position: positionLocal,
	velocity: velocity( positionPrevious.uv( viewportTopLeft ) ).xy
} ) );

const beauty = scenePass.getTextureNode();

const vel = scenePass.getTextureNode( 'velocity' ).mul( blurAmount );

const mBlur = motionBlur( beauty, vel );

@Mugen87 (Collaborator, Author) commented Aug 8, 2024

Sounds fine! The implementation is definitely a good start.

@sunag (Collaborator) commented Aug 8, 2024

I had to go back to something closer to where it was. The approach wouldn't work for skinned meshes since I couldn't access exactly the previous pixel (at least not easily), so in order not to extend this feature too much I decided to revert. Anyway, I'm happy with the tests; some important corrections and improvements were made along the way.

@Mugen87 Thanks for the initiative; there are so many recent improvements. I've had problems trying to use this feature with WebGLRenderer before, and in the new renderer we can have it through MRT, which is fantastic.

[screenshot]

@sunag sunag marked this pull request as ready for review August 8, 2024 23:54
@sunag sunag merged commit 6b126f1 into mrdoob:dev Aug 9, 2024
12 checks passed
@Mugen87 (Collaborator, Author) commented Aug 9, 2024

Also thanks to @gkjohnson! #14510 was a good starting point. Glad to see the implementation finally landed in this repo.
