Suggestion: Compatibility with various renderpipeline and use cases #14
Comments
That is a great suggestion, if such an improvement can tie NSprites closely to the Unity standard rendering pipeline. But I'm completely lost with all this rendering spaghetti in Unity. Can you enlighten me? For example, if |
So, if I understand you correctly, with any SRP (URP in particular, in your example) we can just use a command buffer and it will automatically render sprites, plus we can edit the URP asset settings to tell the system when we want to render our sprites. But for the legacy renderer we would need to adapt the use of the command buffer with a custom render hook and also manually recycle the command buffer. If I've described things right, can you please write a simple example where a command buffer is used instead of direct access to |
Yes.
The above example is just a simple illustration of how it might go, but the dependencies are in a mess. I have yet to figure out a proper dependency design for the system.
The idea where we need a |
For SRPs, we do not need a MonoBehaviour/Camera to execute the CommandBuffer. SRP render execution provides a ScriptableRenderContext, which we use to execute the CommandBuffer: https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.ExecuteCommandBuffer.html

SRP render hierarchy:
For most SRPs, we could simply provide a CommandBuffer getter for however the relevant render pipeline chooses to integrate NSprites. Another alternative is to use the following hook on RenderPipelineManager:

As for legacy, things are a bit trickier. Because we do not have a way of interacting with the current rendering context other than through a Camera, it is somewhat restrictive when operating in a multi-camera environment (e.g. a HUD over gameplay). If we restrict it to operating on a single camera, we could always utilize Camera.main or Camera.current.

Note + apologies: in my previous comment, I mistook SRP for legacy rendering, which led to the use of MonoBehaviour in the example. Sorry for the confusion.
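For reference, a minimal sketch of what such a RenderPipelineManager hook could look like. The `GetCommandBuffer()` accessor is hypothetical, standing in for however NSprites would expose its recorded buffer; only the `RenderPipelineManager.beginCameraRendering` event and `ScriptableRenderContext.ExecuteCommandBuffer` are real Unity APIs:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class NSpritesSrpHook
{
    // Hypothetical accessor for the NSprites command buffer (not an existing API).
    static CommandBuffer GetCommandBuffer() => null;

    [RuntimeInitializeOnLoadMethod]
    static void Install()
    {
        // Raised by any SRP before each camera is rendered.
        RenderPipelineManager.beginCameraRendering += OnBeginCameraRendering;
    }

    static void OnBeginCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        var cmd = GetCommandBuffer();
        if (cmd != null)
            context.ExecuteCommandBuffer(cmd);
    }
}
```

This avoids touching the render pipeline asset at all, at the cost of less control over where in the frame the sprites are drawn.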
We can make SRPs the default way of working with NSprites, especially since Unity is making them more and more the default. This is how we can avoid using a mono-bridge. Though for legacy we can have |
Yes. PackageDependencies:
As a simple check for whether one is using an SRP (from projects that I have worked on), we could actually do a simple |
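The comment is truncated here, so this is an assumption on my part, but the simple check being alluded to is presumably whether an SRP asset is assigned in Graphics Settings:

```csharp
using UnityEngine.Rendering;

public static class RenderPipelineUtility
{
    // True when any Scriptable Render Pipeline is active;
    // false when the project runs on the legacy Built-in pipeline.
    public static bool UsingSrp => GraphicsSettings.currentRenderPipeline != null;
}
```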
There is managed |

```mermaid
graph TD;
  SRP_Extension-->RenderArchetypeStorage;
  Legacy-Mono-Bridge-->RenderArchetypeStorage;
```
Can you please extend this diagram, because I'm completely lost in terms of what is an SRP feature versus what are SRP extensions, and how things should be provided.
So the SRP-Extensions would be extending Core RP modules for SRPs to implement NSprites rendering:

```mermaid
graph TD;
  NSpritesRenderFeature-->NSpritesRenderPass;
  NSpritesRenderPass-->RenderArchetypeStorage;
  Legacy-Mono-Bridge-->RenderArchetypeStorage;
```
For most SRPs, a simple NSpritesRenderPass would suffice:

```csharp
public class NSpritesRenderPass : ScriptableRenderPass
{
    RenderArchetypeStorage _storage;

    public NSpritesRenderPass(RenderArchetypeStorage storage)
    {
        _storage = storage;
    }

    public void SetRenderArchetypeStorage(RenderArchetypeStorage storage)
    {
        _storage = storage;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // Execute every command buffer the storage has recorded.
        foreach (var cmdBuffer in _storage.GetCommandBuffers())
            context.ExecuteCommandBuffer(cmdBuffer);
    }
}
```

For the standard URP/HDRP, we can go on to prepare an NSpritesRenderFeature that wraps the NSpritesRenderPass for the renderers:

```csharp
[Serializable]
public class NSpritesRenderFeature : ScriptableRendererFeature
{
    [SerializeField] RenderPassEvent _renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    protected NSpritesRenderPass _renderPass;

    public override void Create()
    {
        // This is completely independent of any MonoBehaviours;
        // I would need a way of obtaining the RenderArchetypeStorage.
        _renderPass = new NSpritesRenderPass(null);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        _renderPass.renderPassEvent = _renderPassEvent;
        renderer.EnqueuePass(_renderPass);
    }

    public override void SetupRenderPasses(ScriptableRenderer renderer, in RenderingData renderingData)
    {
        _renderPass.SetRenderArchetypeStorage(null); // for updating scene archetypeStorage changes
    }
}
```
I have just got an idea for the Legacy-Bridge:

```csharp
[RequireComponent(typeof(Camera))]
public class NSpritesCamera : MonoBehaviour
{
    Camera _camera;
    CommandBuffer _commandBuffer; // obtained from the NSprites rendering side

    void OnEnable()
    {
        _camera = GetComponent<Camera>();
        // Add CommandBuffer to Camera.
        _camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, _commandBuffer);
    }

    void OnDisable()
    {
        // Remove CommandBuffer from Camera.
        _camera.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, _commandBuffer);
    }
}
```
I'm sorry for being so laggy in understanding things. That is because I need to combine my English interpretation with SRP stuff that is new to me, and work out how the architecture should change to keep the package handy for users.
Unfortunately, it is difficult to get any form of articles or documentation regarding SRP. The people working with it (me included) have gotten used to it through practice and digging into the source code.

I was thinking about that too. We would probably need some form of management for the command buffers. On a side note, I have gotten permission from my CTO to work part-time on this, as I really do not want to reinvent the wheel and make my own data structs for sprite controls.
Discussion here
I believe it can be divided into two steps: NSprites is responsible for data generation using the job system, and then the data generated by NSprites is injected into the SRP, which drives the rendering.
This is just a suggestion.
The following line uses Graphics to draw:
NSprites/Rendering/Common/RenderArchetype.cs, line 576 in abf1821
This would result in an unexpected drawing order when drawing in a sophisticated environment, such as one with multiple cameras.
Would it be better to use a command buffer to perform the draw operation and add it to the necessary pipeline?
One could also add it back to Graphics when desired:
https://docs.unity3d.com/ScriptReference/Graphics.ExecuteCommandBuffer.html
or add it to a specific camera:
https://docs.unity3d.com/ScriptReference/Camera.AddCommandBuffer.html
This would provide support for hybrid rendering.
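To illustrate the suggestion, a minimal sketch of recording the draw into a CommandBuffer instead of issuing it through Graphics directly. The class name and the mesh/material/count parameters are placeholders for illustration, not the actual fields of RenderArchetype; only the CommandBuffer, Graphics, and Camera calls are real Unity APIs:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class CommandBufferDrawExample
{
    readonly CommandBuffer _cmd = new CommandBuffer { name = "NSprites" };

    public CommandBuffer Record(Mesh quad, Material material, int instanceCount, MaterialPropertyBlock props)
    {
        _cmd.Clear();
        // The same instanced draw as before, but recorded for later execution
        // rather than submitted immediately.
        _cmd.DrawMeshInstancedProcedural(quad, 0, material, 0, instanceCount, props);
        return _cmd;
    }
}

// The recorded buffer can then be executed immediately:
//   Graphics.ExecuteCommandBuffer(cmd);
// or attached to a specific camera for the legacy pipeline:
//   camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
```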