Use 0-frame delay for syncing physics transforms to render transforms (instead of 1-frame delay) #1828
Thanks for the comment Calinou. So the update order looks like this in 3.2.3
Hence, `_integrate_forces` and `_physics_process` yield the same result. The current setup works nicely with multithreading, because the step function can be spawned on a new thread. But for single-threaded applications we have an opportunity to take the physics transform already at the end of the step function and apply it to the render transforms. I have made a proof of concept that shows how to flush transforms at the end of stepping, here: I only applied the patch to 3D rigid bodies, so it's still missing the transform flush for bones and 2D. I heard some rumours that there might be a refactor to the main loop, so I hope this can be taken into consideration when that happens. It's really cool that the Godot source code is clean and easy to read :). It makes it super easy to do local tweaks like this. /H
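The timing difference described above can be illustrated with a minimal Python sketch (purely illustrative, not Godot source; the function and variable names are invented for this example). It simulates a frame loop where the render transform is either synced at the start of the next frame (current behaviour, 1-frame delay) or flushed right after the physics step (the proposed 0-frame delay):

```python
# Illustrative simulation of physics->render transform sync timing.
# Not Godot code: run_frames, physics_xform, render_xform are hypothetical names.

def run_frames(num_frames, flush_after_step):
    physics_xform = 0      # transform owned by the physics server
    render_xform = 0       # transform the renderer actually draws
    drawn = []
    for _ in range(num_frames):
        # Current behaviour: sync pass at the start of the frame,
        # so the renderer gets the *previous* tick's transform.
        render_xform = physics_xform
        physics_xform += 1            # physics step moves the body
        if flush_after_step:
            render_xform = physics_xform  # proposed: flush at end of step
        drawn.append(render_xform)
    return drawn

print(run_frames(3, flush_after_step=False))  # [0, 1, 2] - one tick behind
print(run_frames(3, flush_after_step=True))   # [1, 2, 3] - up to date
```

With the flush, the frame drawn immediately after a physics tick already shows that tick's result instead of the previous one.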
For a single-threaded game this solution sounds fine. Though, we are planning to make physics run in a multithreaded scenario, and I think that this model of 1 frame behind is not enough to truly allow physics to run in a dedicated thread. My idea is to run the scripts that deal with physics together with it. The transforms are synced each frame but updated when available. In this way it's possible to lower input latency and truly run physics in its own thread. Though, it's not expected that for each frame a new transform is submitted from the physics to the rendering. If you think about it, even with the model you propose the rendering would not have a new transform each frame anyway, and most likely the player doesn't even notice that the rendering is 1 frame behind. Each case is special, though, so it would probably make sense for your game to run physics with 0 delay, but I doubt this is the route we should follow upstream, considering that we have plans to run physics in MT and this model (physics 1 frame behind rendering) will be changed one way or the other.
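The "synced each frame but updated when available" model above can be sketched as a latest-value handoff between threads (an illustrative sketch, not the planned implementation; `publish` and `render_frame` are invented names). The physics thread publishes a transform when a step completes, and the render loop takes whatever is newest each frame, possibly re-using the same value when no new tick has finished:

```python
# Illustrative sketch of a multithreaded latest-transform handoff.
import threading

latest = {"xform": 0, "tick": 0}
lock = threading.Lock()

def publish(xform, tick):
    # Called by the physics thread when a step finishes.
    with lock:
        latest["xform"], latest["tick"] = xform, tick

def render_frame():
    # Called by the render loop each frame: take the newest available
    # transform; it may be the same one as last frame.
    with lock:
        return latest["xform"]

publish(5, 1)
assert render_frame() == 5
assert render_frame() == 5   # no new tick yet: same transform re-used
publish(7, 2)
assert render_frame() == 7
```

This is why, as noted above, the renderer would not necessarily get a new transform every frame even with a 0-frame sync.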
Thanks Andrea, sounds like you're one step ahead of me :). Applying transforms as soon as they are available sounds good to me. In my use case we run physics at 30 fps and render at 60 fps. We also interpolate, so this adds some extra delay. In practice this means the game would be 4-3 render frames late, and with my hack it is 2-1 render frames late. I did a blind test with my team and they said they noticed a difference. But as said, my patch is just a local hack. I'm looking forward to seeing what you guys come up with in the future. Cheers, H
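The frame counts quoted above follow from a back-of-the-envelope calculation (a hypothetical helper for illustration, not code from the patch): each physics tick of lag corresponds to `render_fps / physics_fps` render frames, and interpolation determines how many ticks behind the drawn pose is at worst:

```python
def render_frames_of_lag(physics_fps, render_fps, physics_ticks_behind):
    # Worst-case lag in render frames = ticks behind * render frames per tick.
    return physics_ticks_behind * render_fps // physics_fps

# Default scheme interpolates between ticks t-2 and t-1: up to 2 ticks behind.
assert render_frames_of_lag(30, 60, 2) == 4
# With the end-of-step flush, interpolation uses t-1 and t: up to 1 tick behind.
assert render_frames_of_lag(30, 60, 1) == 2
```

At 30 fps physics and 60 fps rendering this gives the 4-frame worst case by default versus 2 frames with the hack, matching the ranges reported above.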
Could @huhund's suggestion be implemented in the |
@lawnjelly Is this approach compatible with your physics interpolation implementation?
I'm not super familiar with how the updates are done from the physics (more @pouleyKetchoupp's area), but in principle I'm not sure there is any difference for interpolation: all it needs is a regular update of transforms on physics ticks. Whether these are a tick behind or not is down to the physics; the interpolation code is the same one way or another. Indeed, @huhund above says they are using interpolation with this scheme.
Yes, we interpolate the positions in our own game code. The only difference is that we interpolate between "current_frame-1, current_frame" instead of the default "current_frame-2, current_frame-1".
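The difference between the two interpolation windows can be shown with a small sketch (illustrative only; `pick_pair` and `interpolate` are hypothetical names, not engine API). Given a history of per-tick transforms, the default scheme blends the two older entries, while the 0-delay scheme blends the two newest:

```python
def pick_pair(history, zero_frame_delay):
    # history[-1] is the most recent physics tick's transform.
    if zero_frame_delay:
        return history[-2], history[-1]   # "current_frame-1, current_frame"
    return history[-3], history[-2]       # default "current_frame-2, current_frame-1"

def interpolate(a, b, alpha):
    # alpha in [0, 1]: fraction of the physics tick elapsed at render time.
    return a + (b - a) * alpha

hist = [10.0, 20.0, 30.0]                 # transforms at ticks t-2, t-1, t
assert interpolate(*pick_pair(hist, True), 0.5) == 25.0   # one tick fresher
assert interpolate(*pick_pair(hist, False), 0.5) == 15.0  # default window
```

Both schemes produce equally smooth motion; the 0-delay window is simply one physics tick closer to the present.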
@huhund If you have time, could you open a pull request with your changes (preferably against the |
Any updates on this?
To my knowledge, nobody is currently working on this. I think this is worth pursuing, but keep in mind it can't be done with multi-threaded physics simulation without destroying its purpose (performance gains through parallelism). That said, since most projects are doing just fine with single-threaded physics, I think this will be beneficial overall.
I agree it would be nice to have the option just for single-threaded physics. Here's a short clip of moving an Area3D with two child nodes: a Sphere CollisionShape3D (drawn with the white circle) and a Sprite3D (the Godot icon). A second Area3D, also with a Sphere, is placed to the right and is listening for collisions. Engine.PhysicsTicksPerSecond is set to 3 in order to more clearly show the discrepancy between the render and physics transforms. Movement is performed simply by altering the Area3D's Position, called from within PhysicsProcess(). I assume this is a result of what has been discussed in this thread?
Describe the project you are working on:
Any game that uses physics and requires minimal lag. However, this would only work for single-threaded physics using Godot Physics. (It can't be applied to Bullet because of how Bullet handles pretick.)
Describe the problem or limitation you are having in your project:
It would be nice if physics xforms were applied to render xforms immediately after a physics tick. Currently they are applied one frame late.
Describe the feature / enhancement and how it helps to overcome the problem or limitation:
E.g. it would make games more responsive to joystick input. It is also beneficial to multiplayer games: e.g. if a server runs at 30 fps, it would reduce overall lag by 33 ms.
Describe how your proposal will work, with code, pseudocode, mockups, and/or diagrams:
Current update looks like this:
As we see the render frames still have the old xforms.
Suggestion is to have something like this:
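In place of a diagram, the proposed reordering of the main loop can be sketched in a few lines (an illustrative sketch; the step names are invented, not Godot's actual functions in `main.cpp`):

```python
def iteration(sync_after_step):
    # Returns the order of operations within one main-loop iteration.
    order = []
    if not sync_after_step:
        order.append("sync_transforms")   # current: pushes *last* tick's xforms
    order.append("physics_step")
    if sync_after_step:
        order.append("sync_transforms")   # proposed: 0-frame delay
    order.append("draw")
    return order

# Current main loop: the renderer draws transforms from the previous tick.
assert iteration(False) == ["sync_transforms", "physics_step", "draw"]
# Proposed: flush right after stepping, so the draw uses this tick's result.
assert iteration(True) == ["physics_step", "sync_transforms", "draw"]
```

The only change is where the physics-to-render sync happens within the iteration; everything else in the loop is untouched.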
If this enhancement will not be used often, can it be worked around with a few lines of script?:
This is part of the main loop in main.cpp
Is there a reason why this should be core and not an add-on in the asset library?:
This is part of the main loop in main.cpp