Add OIDN Denoiser. #663
base: main
Conversation
I left a smattering of comments for some of the things that stuck out to me. Once some of the architecture is nailed down and it's more clear to me exactly what's happening I can help address some of the color space and tone mapping issues.
src/core/utils/DenoiserManager.js (outdated)

```js
// Run the denoiser
const denoisedWebGLTexture = await this.denoiser.execute();
```
I see some logs about tiling happening - are the tiles being rendered over multiple frames? It would be nice if the page didn't lock up quite so much when denoising is happening. The tiles could also be used to display a completion percentage.
The tile generation we could probably wire up to wait for a frame before going to the next tile (allowing less lockup), but I wonder if this would slow things down on higher-speed machines. TensorFlow has a method for this.
Tile reconstruction (which is the second half) should happen as fast as possible; I'm not sure we want to divide that up.
I'll log down both strategies to try.
One thing to remember: with the WebGL backend the first execution is SIGNIFICANTLY longer than the others. TensorFlow waits to compile shaders until you actually execute. It's suggested we run a few images through to "prime" the model, but this would cause it to be slow at load time, which I was avoiding.
I could see a "ready()" callback that fires after a warm-up image is passed.
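For illustration, yielding between tiles could look roughly like this. The per-tile loop, `tiles`, and `runTile` are hypothetical stand-ins for the denoiser internals, not the real API; `tf.nextFrame()` is the TensorFlow.js helper alluded to above:

```js
import * as tf from '@tensorflow/tfjs';

// Illustrative sketch only: run one tile per animation frame so the page stays
// responsive, and surface a 0-1 completion value for a progress UI.
// `tiles` and `runTile` are hypothetical stand-ins for the denoiser internals.
async function executeTiled( tiles, runTile, onProgress ) {

	for ( let i = 0; i < tiles.length; i ++ ) {

		runTile( tiles[ i ] ); // per-tile inference
		onProgress( ( i + 1 ) / tiles.length );
		await tf.nextFrame(); // yield to the browser before the next tile

	}

}
```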
> With the WebGL backend the first execution is SIGNIFICANTLY longer than the others. TensorFlow waits to compile shaders until you actually execute
Is there any way to use the KHR_parallel_shader_compile extension so the shaders will compile asynchronously? Three.js implements this with WebGLRenderer.compileAsync, which the PathTracer uses here so the page does not block while the path tracing shader compiles.
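For reference, the three.js side of that is a one-liner (real `WebGLRenderer.compileAsync` API; `renderer`, `scene`, and `camera` are placeholders):

```js
// Sketch: pre-compile all materials in the scene without blocking the page.
// three.js uses KHR_parallel_shader_compile internally when the extension is available.
async function warmUpShaders( renderer, scene, camera ) {

	await renderer.compileAsync( scene, camera ); // resolves once compilation finishes

}
```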
- Moved the denoise class external
- Renamed class OIDNDenoiser
- Moved many props to denoiser directly
- Updated example with moved location
- Removed weird imports
- Removed yarn lockfile
- Minor cleanups
Simplified example and removed blocks for renderScale. Now works with different render sizes. Oddly, normals/albedos also work
Albedos still seem off. Can't for the life of me get the background to go into sRGB
I actually already tonemap the texture prior to it going to the denoiser. The reason I use the blendMaterial (besides it being so useful) is that it doesn't apply anything else. At the point of the blend it should have the original perfect output (need to add the ability to remove tonemapping from mine) and the correct denoiser output. At the moment I'm super happy with the color result. The pathtraced and denoised look great. The albedo I don't know, and I'll try to find time to mess with it.
The quality does look really nice - but there is some color shift once it goes through the denoiser. It's just not clear to me why, yet. I'll have to take a look this weekend.
Removed complex albedo gen and added material pool. Added PP step to albedo gen so colors correct
Material Pool system added. We now create materials as needed and set the values based on the object's original material. Removed the complex albedo generator in favor of just a MeshBasicMaterial. It works for now but is likely going to have transparency and reflectance issues. Added a PP pass that correctly tonemaps and converts the albedo, so now it matches the original pathtracer beauty output when we send it to the denoiser. I think this will fix any oddities you see in the colors.
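As I read it, the pool is roughly a map from source material to a reusable basic material. A minimal sketch of that idea (the pool shape and the `getAlbedoMaterial` name are my guesses, not the PR's actual code):

```js
import * as THREE from 'three';

// Sketch of the material-pool idea: one MeshBasicMaterial per source material,
// created on demand and synced from the original before the albedo pass.
const albedoPool = new Map();

function getAlbedoMaterial( sourceMaterial ) {

	let basic = albedoPool.get( sourceMaterial );
	if ( ! basic ) {

		basic = new THREE.MeshBasicMaterial();
		albedoPool.set( sourceMaterial, basic );

	}

	basic.color.copy( sourceMaterial.color );
	basic.map = sourceMaterial.map || null;
	return basic;

}
```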
Worked on the pipeline. Added own materials to not break originals
I can take a look at the colors
Do you have a reference for this? I would think ideally tone mapping is always applied before colors have been quantized into 8 bits to avoid banding but it shouldn't be a huge deal. We can take a look at this later I think.
We should be able to just set the mesh opacity and alpha maps to help with this. Transmission is more complicated because we can't correctly model the refraction that happens in that case.
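Concretely, carrying the alpha-related properties over could look like this. A sketch, assuming the pooled MeshBasicMaterial from above; transmission is deliberately left out since refraction can't be modeled this way:

```js
// Sketch: copy opacity/alpha settings from the original material onto the
// pooled albedo material so transparent surfaces aren't rendered opaque.
function syncAlphaProperties( albedoMaterial, originalMaterial ) {

	albedoMaterial.transparent = originalMaterial.transparent;
	albedoMaterial.opacity = originalMaterial.opacity;
	albedoMaterial.alphaMap = originalMaterial.alphaMap || null;
	albedoMaterial.alphaTest = originalMaterial.alphaTest;
	albedoMaterial.needsUpdate = true;

}
```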
Interesting - I haven't seen any tiling seams before, so my assumption is that this is related to the denoiser. I'll try to keep an eye on this while looking at the color processing.
Avoiding the lock up I think would be great. The two things that come to mind are performing tiles over multiple frames and performing async material compile so other tasks can run while compiling is happening. An event or flag that specifies how complete the denoising process is (just 0 to 1) would let us display it. And likewise a way to "cancel" the denoiser for cases where the camera moves, scene changes, etc. would be nice.
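To make that wish list concrete, the consumer-facing shape could be something like this. All names are hypothetical apart from `onProgress`, which appears later in this PR; `controls` and `progressBar` are placeholders:

```js
// Hypothetical API sketch, not the current implementation.
denoiser.onProgress( value => progressBar.style.width = `${ value * 100 }%` ); // value in [ 0, 1 ]

controls.addEventListener( 'change', () => {

	// camera moved: discard any in-flight denoise result
	if ( denoiser.isDenoising ) denoiser.abort(); // `abort` is the hoped-for cancel hook

} );
```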
```
@@ -0,0 +1,779 @@
import {
```
I think ideally this example would just include a single model to keep it focused and simple. An ideal model would include something with opacity and transmission so we can evaluate these kinds of problem cases. See the HDR example or DoF example. Though this is something that can be adjusted later if needed.
I get that, and we can certainly limit the models. But I think it's easier to switch to models that highlight the edge cases specifically vs trying to find a single model that fits every one.
LMK what you think and I'll adjust it
I think just one model is good for now - for debugging we can swap between models to check issues. I think it's best to have a simpler example for users to reference.
```js
get hdr() {

	return this.denoiser.hdr;

}
```
Is `hdr` functional right now? It looks like we always pass sRGB into the denoiser.
It was.
Technically the denoiser can already support it.
While working out the color issues I was matching the original system as much as possible. This included passing the HDR texture.
When passing HDR and using "fast" quality I got HORRIBLE results, but using balanced was excellent. That's why I added the auto change to those props.
When trying to address the color issues in LDR I resolved it and found the system to be more stable with fast/balanced quality while using LDR.
If you don't ever plan to pass HDR to the denoiser or don't like leaving future handling code I can remove it.
Gotcha - is it the case that we just have to pass the original LinearSRGB path traced texture when `hdr` is true and we'll get a LinearSRGB texture out? I can try it a bit to see how it looks.
Yes, if we pass that texture as the input the denoiser will return it in the same range (or at least it should; I've only tested a few times).
OIDN says that in HDR a value of 1 should correspond to a luminance level of 100 cd/m², so for best results it should be similar.
If you do that you may want to disable the SRGBToLinear conversion that takes place on the denoiser result. I do this so we can correctly apply the ToneMapper.
One thing to consider would be dropping the sRGB requirement on the denoiser input. This would mean textures don't need to be converted when returned and can be tonemapped directly.
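If that requirement were dropped, the wiring might look roughly like this. `hdr` and `execute()` are from this PR; the conversion flag is purely a hypothetical name for illustration:

```js
// Sketch: keep everything linear end to end. Feed the linear HDR path traced
// texture in, skip the sRGB round trip, and tone map the denoised result.
denoiser.hdr = true;
denoiser.skipSRGBConversion = true; // hypothetical flag, not a real denoiser option

const denoisedWebGLTexture = await denoiser.execute();
// ...then tone map the denoised texture exactly like the un-denoised path traced output.
```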
For sure not trying to add burden, I'm just copying the examples that are already in use with the PBR example, so if they change or get removed it's just as easy to remove them here. Yes, you can run the denoiser in the PBR example, but you can't run the split comparison or look at how the transparent, reflective, metallic, etc. objects generate albedos/normals. Instead of finding a super-model that tests everything in one go, it might be easier to use the same models we already load and see how they compare to each other. Whatever you want to do though is fine with me. I can always make my own example on the main denoiser site with more models.
This is a mistake. This comes from battling color conversions. Regardless of the colorspace of the inputs, normals should be in linear. I will adjust this. The denoiser doesn't actually use normals in the common sense, but this will better match the training data.
It's actually implemented from what I can tell, but it's buried in GitHub discussions and not documented. I will experiment with this upstream.
I am going to work on a flowchart of the denoiser so it's easier to explain, but right now the progress is only measuring the inference/tiling stage, which executes with frame blocking to allow the callbacks. The later steps execute in a single pass and there's no way to report progress until they're totally done. The biggest delay is probably syncing from GPU to CPU.
Weird, I've added it to my list to put in the WebGLStateManager class (although I thought it was there)
Calling a draw call means three.js takes over the state. I'm sure this messes things up. With the new abort flag the denoiser shouldn't output anything after abort is called, so if you need to draw, abort the denoiser and it should stop anything returning. It's a new feature, so if it doesn't work right lmk and I'll try and test for it.
Weird, I thought this got resolved with the correct tonemapping etc. We can actually test pretty easily. Hopefully it's just a colorspace thing. The next thing I could think of would be an albedo/normal issue? One thing I did also try is using more samples. Using more samples seemed to stop the denoiser guessing so much on things.
For users trying to understand the denoiser settings available I don't think it's necessary to demonstrate with multiple models. And for dev we can change the models for testing. I'll update this after the PR is merged, though.
The relative scale of the values being correct should be an improvement
I figured the WebGL state for TF was being saved after the TF function exited and would be restored to what TF needs when restarting the TF work. Or are these "starts" and "ends" something not in our control?
Using linear data would simplify things. I'd be surprised if the reason is just the number of samples, though. The tops of those Lego studs are almost all gray converted to white, and some of the studs in the bottom right are erased completely. It seems to primarily be a problem with white?
Something in the recent commits has broken things... When you rotate after denoise the denoised image turns to a blank canvas. Also, the denoiser is being marked dirty on every run, so some property is being set over and over. I'll try to go through the commits to find it.
Ok, the marking dirty is from removing the check for whether the size has changed. I will work on this upstream to protect against it. However, the blend not blending is still a problem. Both were introduced in "some cleanup".
Just pushed a line I forgot to commit - sorry about that. You're talking specifically about the denoiser.html example, right? I'm not seeing issues in the index.html page.
Interesting, I fixed it a different way. You change the scissorTest value from false to true but never reset it; if you set it to false after the render it also fixes it. I thought it was an issue with the blending of the denoised over the noisy, not the scissor test. Once I realized it wasn't on the index test I found the scissor issue.
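For anyone following along, the fix described here amounts to restoring the scissor state after the split render. A sketch using the real three.js scissor API; `splitX` and `height` are placeholder dimensions:

```js
// Sketch of the scissor fix: enable the scissor for the split-comparison
// render, then reset it so later passes (like the denoised blend) aren't clipped.
renderer.setScissorTest( true );
renderer.setScissor( 0, 0, splitX, height );
renderer.render( scene, camera );
renderer.setScissorTest( false ); // leaving this enabled was breaking the blend
```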
```js
this.denoiser.weightsUrl = 'https://cdn.jsdelivr.net/npm/denoiser/tzas';
this.denoiser.onProgress( progress => this.handleProgress( progress ) );
//this.denoiser.hdr = true;
this.denoiser.webglStateManager.ingoreRestore = true;
```
Spelling issue - ingoreRestore -> ignoreRestore. Can we add an inline comment describing why we need this?
Nice! Can you point to where this parallel compile is being done? If you attempt to use the shader after starting the parallel compile before it's done, then the driver will still stall the page. The goal here is less of a speed improvement and more preventing the page from stalling.
This is an issue with DirectX that's unavoidable. For some reason it takes forever to compile shaders. The Chrome / ANGLE team have said it's a known issue. The use of async compile should mean the page isn't completely locked while the shader compiles, though. This `isCompiling` flag is used in
I took a look today and it seems to be due to clamping colors when converting from the Float texture to the Uint8 target. This clamps and loses data above 1.0 used in the final tone mapping step, which is why we're losing some of the bright highlights in the final image.
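In miniature, the order-of-operations problem looks like this (using a Reinhard-style operator purely for illustration):

```js
// Quantizing/clamping to [ 0, 1 ] before tone mapping destroys highlight data.
const hdrValue = 2.5; // a bright highlight above 1.0

// wrong order: clamp to the 8-bit range first, then tone map
const clamped = Math.min( hdrValue, 1.0 );   // 1.0 - highlight lost
const late = clamped / ( clamped + 1.0 );    // 0.5

// right order: tone map while still in float, then quantize
const early = hdrValue / ( hdrValue + 1.0 ); // ~0.71 - highlight preserved
console.log( late, early );
```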
The normals on the left are packed into the range [ 0, 1 ] - looking at the OIDN docs again it looks like they're supposed to be in the range [ -1, 1 ], which I must have misread previously.
This is looking good, though. Looks like just a few final things.
Finally have some time to circle back on this and wanted to see what remaining things are needed. Regarding HDR: I've had hit-or-miss results with it and found the standard def had better results. We can start working in HDR, but wouldn't that make everything HDR, or would we clamp it down at the end? Regarding normals: I'll have to double check, but I'm pretty sure I'm already remapping to [-1, 1] during the texture setting phase (I assume most users and shaders will use [0, 1]).
So, we can try working in HDR, but is that the output you want from the pathtracer? The denoiser will output HDR if we want it to. But I didn't know if that's what the pathtracer should send to the texture. We'll need to test a bit, as I recall it had some issues. Otherwise I wonder if we can move the clamping to after the tonemap step.
I'm not sure what you mean exactly; right now I take a standard normal texture as input and re-map it. In my base renderer this seems to work ok.
The goal would be to denoise in HDR, then tone map the image to SDR. We could tone map before denoising to fix the color clamping, but this way is more flexible and lets the user change the tone mapping without re-denoising, and also lets you save a denoised HDR image. Right now the final result from the path tracer in a rendertarget is an HDR float buffer that is tone mapped when written to the canvas.
What I'm wondering is what happens when a component is negative. If using an rgb8 texture (as the normal textures are here) any negative components will be clamped to 0 and there will be no distinction in the normals. What's the right thing to do here? Should we be using float textures? Does OIDN support that?
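On the three.js side, keeping negative components is straightforward with a float target. A sketch with placeholder dimensions; whether the denoiser accepts float input is the open question above:

```js
import * as THREE from 'three';

// Sketch: a float render target for the normals pass so negative components
// survive (an rgb8 target would clamp them to 0).
const width = 1024, height = 1024; // placeholder dimensions

const normalTarget = new THREE.WebGLRenderTarget( width, height, {
	type: THREE.FloatType, // or THREE.HalfFloatType
	format: THREE.RGBAFormat,
	colorSpace: THREE.NoColorSpace, // normals are data, not color
} );
```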
This PR adds the OIDN Denoiser (Denoiser) into the core three-gpu-pathtracer.
It works and looks decent, but there are clear issues to resolve.
API Added

- `maxSamples` — The pathtracer now stops when reaching this number. It's also the sample count at which the system denoises.
- `enableDenoiser` — Whether we are using the denoiser (NOTE: the denoiser actually initializes regardless of this for WebGL compat).
- `weightsUrl` — Lets you bypass the jsDelivr URL for your own location of the TZAs. Also how you could pass your own weights in general.
- `denoiserQuality` — Fast by default; balanced is better. "high" is really heavy and not supported in the Denoiser yet.
- `useAux` — Whether you want to use just the pathtraced input (fast, but bad) or include the aux textures.
- `renderAux` — If you want to view the aux inputs, set this to `albedo` or `normal`; `null` gets you the denoised output.
- `cleanAux` — If you are 100% sure the aux inputs are clean. ANY noise will be projected into the output (NOTE: an upstream fix may be needed).
- `externalAux` — If the user is supplying their own aux textures. Disables automatic generation.
- `denoiserDebugging` — Sets the denoiser to log more things, including an output report.

State API Properties (read-only)

- `isDenoising` — If the denoiser is running.
- `isDenoised` — If the current output is the denoised output.

Methods

- `generateAux()` — Runs the albedo/normals pass on the current scene. Returns `{ albedo: THREE.Texture, normal: THREE.Texture }`.
- `setAuxTextures(albedoTexture, normalTexture)` — If the user wants to set the aux themselves, like in a pipeline or a deferred setup. Deactivates internal automatic creation (`this.externalAux = true`).
- `denoiseSample(inputTexture, albedoTexture, normalTexture)` — Loads automatically or can be overridden, letting you send to the denoiser directly via the `DenoiserManager`. Not sure about exposing this; called internally by `renderSample`.
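Pulling the list above together, a typical setup might look like this. A sketch based on the properties described in this PR; `pathTracer` is a placeholder for the configured path tracer instance:

```js
// Sketch: configuring the denoiser through the new path tracer options.
pathTracer.maxSamples = 100;             // stop and denoise once 100 samples accumulate
pathTracer.enableDenoiser = true;
pathTracer.denoiserQuality = 'balanced'; // 'fast' is the default
pathTracer.useAux = true;                // generate and pass albedo/normal textures
pathTracer.renderAux = null;             // or 'albedo' / 'normal' to inspect the aux passes
pathTracer.denoiserDebugging = false;
```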
Changes to Core

- `maxSamples` and its stopping of the pathtracer when reached.
- One change was added to the `BlendingMaterial` to allow a conversion to happen to texture2. While we resolve the colorspace issues it may be useful.

Additions

- A `DenoiserManager` that uses the `Denoiser` class directly.

Flow With Denoising
Assuming the Denoiser is enabled and all options are default/set:

1. `maxSamples` is hit.
2. `denoiseSample` calls until finished.
3. The input texture (`pathtracer.target.texture`), AlbedoTexture, and NormalTexture are sent to the DenoiserManager.
4. Raw `WebGLTexture`s are pulled from the internals of the input textures (`THREE.Texture`s).
5. The denoiser executes and returns the `denoisedWebGLTexture`.
6. An `outputTexture` is created (one of the many steps to get WebGL/Three to play nice).
7. The `denoisedWebGLTexture` is swapped in with the `outputTexture`.
8. The `denoisedTexture` is held in the class, ready to be rendered.
9. On the next `renderSample` call, now that `isDenoised` is true, render using `DenoiserManager.renderOutput()`.
10. The output is blended with the `BlendingMaterial` using the `denoisedTexture`. (Note: if `renderAux` is set, the aux texture provided will be rendered so you can see either the albedo or normal textures visually. Not sure the normals render how you'd expect, as they should be outputting in [-1, 1].)

If reset is called it all drops back to normal, hides the outputs, etc., and starts over.
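As a consumer-side sketch of that flow (the loop structure is illustrative; `isDenoised` and `renderSample` are from this PR, `pathTracer` is a placeholder):

```js
// Illustrative render loop around the flow above: keep path tracing until
// maxSamples is hit; renderSample then displays the denoised output internally.
function animate() {

	requestAnimationFrame( animate );
	pathTracer.renderSample(); // blends the denoisedTexture once isDenoised is true

}

animate();
```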
Known Issues
1. Colorspaces/tonemapping/conversion: Something is clearly not set up right through the pipeline regarding colors. The denoiser DOES NOT CARE what colorspace you input; whatever you input, it will output. Color and albedo inputs should be in the same colorspace, and normals are expected to be in linear. The output looks correct but dark; converting makes it look flat.
This will take tweaking and someone smarter than me with regards to color to follow the renderTargets (all set up with `THREE.SRGBColorSpace`) and decide what we should convert/adjust.

2. Normal Generation: I have a script set up to use worldspace or viewspace normals and output to [-1, 1]. I don't know if colorSpace/RTs might be affecting this (I don't think it matters). Also, I have added code to accept normalMaps on meshes so they can be used instead of the raw surface. I tried converting the normalMaps to worldspace with tragic results. For the moment, if an object has a map I use it.
Something very important to point out about normals and the denoiser: the denoiser does not actually use them for normal values or any kind of light calculations whatsoever. They can be in world/local space. The normal maps just help define edges and breaks in materials.
3. Albedo Generation: In the OIDN docs they go into detail about the albedos. In general, most matte surfaces can use just the basic color output or textures. This isn't true for highly reflective surfaces (like the floor). Here the albedo wants something different; the docs say something to the effect that reflective surfaces should have the albedo of whatever they are reflecting, or a full 1 value. It gets even weirder for reflections.
4. Edges with floor and background: Looking at the original OIDN examples, they do not expect black backgrounds or hard edges where soft edges blend with the original background. The normals/albedo passes read those as hard crisp edges or flat planes, so the gradient background gets flattened into a weird flat gradient and the floor has sharp edges. So with three.js backgrounds or envMaps we should generate something for both of these passes and include any floor/horizons.
The only one of these issues I see as a blocker is the colorspaces. Generating albedos and normals etc. we can work on for a while, and it would be fine to release improvements as updates. But we gotta get closer on the color outputs IMO.
Other Changes
I updated the plain example (index) to include denoiser props and some other slight adjustments. I added a rawOutput canvas to the HTML, which is very easy to set up when debugging to see exactly what the denoiser is outputting and confirm your inputs/outputs.
Notes
I tried commenting a lot to explain what is happening, and I realize my commits are terribly labeled. I was focusing on getting this added as the main thing and not making a lot of gradual changes.