
Depth textures are consistently read as 0 in compute shaders #30070

Closed
Spiri0 opened this issue Dec 9, 2024 · 6 comments · Fixed by #30082
Spiri0 commented Dec 9, 2024

Description

Meanwhile I try to solve problems myself and open PRs, but that always takes a lot of time because I still have a lot to learn about three.js.
It looks like depth textures don't work in compute shaders. I noticed this when I tried to do culling on the GPU: the depth values are all 0, which leads to all objects being treated as occluded and becoming invisible.
To make the problem visible, I wrote an example that tries to copy the depthTexture with a compute shader. And indeed, everything is 0, i.e. a black image, so the depthTexture really doesn't work in the compute shader. As a cross-check, I replaced the texture read with a scalar value, var depth = 0.95; as expected, this produces a gray texture that the postProcessing shader outputs.

In fragment shaders I use depthTexture: f32, but in a compute shader you have to use texture_depth_2d, because with f32 you get an error message. I suspect the problem is that multisampling is used, which is problematic in compute shaders.
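To illustrate why a depth texture that always reads 0 hides everything, here is a minimal plain-JavaScript sketch of the occlusion test (isOccluded is a hypothetical helper for illustration, not code from this issue; it assumes a standard depth buffer where 0 is the near plane and 1 the far plane):

```javascript
// Hypothetical occlusion test: an object is considered covered when the depth
// already stored at its screen position is nearer (smaller) than its own depth.
function isOccluded(objectDepth, sampledDepth) {
  return objectDepth > sampledDepth;
}

// With a working depth texture, an object at depth 0.2 in front of a
// background sample at depth 0.9 stays visible:
console.log(isOccluded(0.2, 0.9)); // false -> visible

// If the texture is consistently read as 0, every object compares against 0
// and is classified as covered, so everything becomes invisible:
console.log(isOccluded(0.2, 0.0)); // true -> culled
```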

Reproduction steps

Create a depthTexture with depthPass.
Create a compute shader that copies textures and assign the depthTexture to it as the texture to be copied.
Create a postProcessing shader that uses the copied depthTexture for control.

The actual example is simpler than these steps suggest; I've already built it.

Code

The point is not to copy depth textures this way; this is just an example to get visual feedback on whether the depthTexture is being read in the compute shader.
See the example. I currently use the depthTexture directly in the postProcessing shader so that you can see what would be expected if the copy from the compute shader worked.

To view the texture from the compute shader instead, enable // 1 in the postProcessing shader and comment out // 2. Additionally, replace depthTexture: f32 with depthTexture: texture_2d<f32>.

this.sceneDepthPass = depthPass( this.scene, this.camera );
this.sceneDepthPassTexture = this.sceneDepthPass.getTextureNode( 'depth' );

const copyDepthTextureWGSL = wgslFn(`
  fn computeWGSL( 
    writeTex: texture_storage_2d<rgba32float, write>,
    readTex: texture_depth_2d,
    index: u32,
    size: f32,
  ) -> void {
        
    var posX = index % u32( size );
    var posY = index / u32( size );
    var idx = vec2u( posX, posY );
        
    var depth = textureLoad( readTex, idx, 0 );
    textureStore(writeTex, idx, vec4<f32>( depth, depth, depth, 1 ) );
  }
`);

this.copyDepthTexture = new THREE.StorageTexture( window.innerWidth, window.innerHeight );
this.copyDepthTexture.type = THREE.FloatType;

this.computeCopyDepthTexture = copyDepthTextureWGSL( { 
  size: uniform( window.innerWidth ),
  readTex: this.sceneDepthPassTexture,
  writeTex: textureStore( this.copyDepthTexture ),
  index: instanceIndex 
} ).compute( window.innerWidth * window.innerHeight );

this.renderer.compute( this.computeCopyDepthTexture );
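As a side note, the 1D-to-2D index math in computeWGSL above is plain integer arithmetic and can be mirrored in JavaScript (a sketch for illustration; Math.floor stands in for WGSL's truncating u32 division):

```javascript
// Mirror of the WGSL index math: each invocation's 1D instanceIndex is mapped
// to the 2D texel coordinates of a texture with `size` texels per row.
function indexToTexel(index, size) {
  const posX = index % size;              // var posX = index % u32( size );
  const posY = Math.floor(index / size);  // var posY = index / u32( size );
  return [posX, posY];
}

console.log(indexToTexel(0, 4));  // [ 0, 0 ]
console.log(indexToTexel(5, 4));  // [ 1, 1 ]
console.log(indexToTexel(15, 4)); // [ 3, 3 ]
```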

Live example

https://codepen.io/Spiri0/pen/LEPNdBd

Screenshots

No response

Version

171

Device

No response

Browser

No response

OS

No response

@Spiri0 Spiri0 changed the title Depth textures are consistently read as 0 in computer shaders Depth textures are consistently read as 0 in compute shaders Dec 9, 2024
Repository owner deleted a comment from yuvashrikarunakaran Dec 9, 2024
@sunag sunag added this to the r172 milestone Dec 9, 2024
sunag commented Dec 10, 2024

This PR should solve this issue and also handle multisampled depth. Some code changes were necessary.

import * as THREE from "three/webgpu";
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { wgslFn, vec3, vec4, uniform } from "three/tsl";
import { pass, depthPass, viewportUV, screenUV, instanceIndex, textureStore, texture } from "three/tsl";


class App {
    constructor() {
    }

    async init() {

        this.renderer = new THREE.WebGPURenderer({ 
            canvas: document.createElement('canvas'),
            antialias: true
        });

        this.renderer.outputColorSpace = THREE.SRGBColorSpace;
        this.renderer.setPixelRatio(window.devicePixelRatio);
        this.renderer.shadowMap.enabled = true;
        this.renderer.shadowMap.type = THREE.PCFSoftShadowMap;
        this.renderer.physicallyCorrectLights = true;
        this.renderer.domElement.id = 'threejs';
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.setClearColor( 0x000000 );
        document.body.appendChild( this.renderer.domElement );

        await this.renderer.init();

        this.scene = new THREE.Scene();
        this.scene.background = new THREE.Color( 0x00001f );
        this.camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.01, 10);
        this.camera.position.set(1, 1, 1);
        this.controls = new OrbitControls(this.camera, this.renderer.domElement);
        this.controls.target.set(0, 0, 0);
        this.controls.update();

        const material = new THREE.MeshBasicNodeMaterial();
        material.colorNode = vec4( 0, 0.25, 0.75, 1 ); 
        const geometry = new THREE.BoxGeometry( 1, 1, 1 );
        const box = new THREE.Mesh( geometry, material );
        this.scene.add( box ); 

        this.postProcessing = new THREE.PostProcessing( this.renderer );
      
      
        this.sceneDepthPass = depthPass( this.scene, this.camera, { samples: this.renderer.samples } );
        this.sceneDepthPassTexture = this.sceneDepthPass.getTextureNode( 'depth' );

        const copyDepthTextureWGSL = wgslFn(`
            fn computeWGSL( 
                writeTex: texture_storage_2d<rgba32float, write>,
                readTex: texture_depth_multisampled_2d,
                index: u32,
                size: f32,
            ) -> void {
        
                var posX = index % u32( size );
                var posY = index / u32( size );
                var idx = vec2u( posX, posY );
        
                var depth = textureLoad( readTex, idx, 0 );
                //var depth = .999;
                textureStore(writeTex, idx, vec4<f32>( depth, depth, depth, depth ) );
            }
        `);
  
        this.copyDepthTexture = new THREE.StorageTexture( window.innerWidth, window.innerHeight );
        this.copyDepthTexture.type = THREE.FloatType;

        this.computeCopyDepthTexture = copyDepthTextureWGSL( { 
            size: uniform( window.innerWidth ),
            readTex: this.sceneDepthPassTexture,
            writeTex: textureStore( this.copyDepthTexture ),
            index: instanceIndex 
        } ).compute( window.innerWidth * window.innerHeight );

        const a = vec3( 0 ).add( 0 );
        const b = a.x.add( a.y );


        // note: a JS object literal cannot hold the same key twice (the later one
        // wins), so only one depthTexture option can be active at a time; option
        // // 2 matches the f32 shader signature below
        const shaderParams = {
            uv: viewportUV,
            depthTexture: this.sceneDepthPass, // 2
            //depthTexture: texture( this.copyDepthTexture ), // 1
            near: this.camera.near,
            far: this.camera.far,
            width: uniform( window.innerWidth ),
            height: uniform( window.innerHeight ).add( b )
        };
          
        // for option // 1, change the signature below to: depthTexture: texture_2d<f32>,
        const fragmentShader = wgslFn(`
            fn main_fragment(
                uv: vec2<f32>,
                depthTexture: f32,
                near: f32,
                far: f32,
                width: f32,
                height: f32
            ) -> vec4<f32> {
            
                //let depth = textureLoad( depthTexture, vec2u( u32( uv.x * width ), u32( uv.y * height ) ), 0 ).r; // 1
                //let zLinear = ( near * far ) / ( far - depth * ( far - near ) );  // 1

                let zLinear = ( near * far ) / ( far - depthTexture * ( far - near ) ); // 2

                let zNormalized = ( zLinear - near ) / ( far - near );

                return vec4( vec3( zNormalized ), 1.0 );
            }
        `);
        
        this.postProcessing.outputNode = this.sceneDepthPass.next( fragmentShader( shaderParams ) );

        window.addEventListener( "resize", this.onWindowResize, false );

        this.render();
    }

    //---------------------------------------------------------------

    render() {

        this.renderer.compute( this.computeCopyDepthTexture );

        this.postProcessing.render();

        requestAnimationFrame(() => {
            this.render();
        });
    }

    onWindowResize() {
        if( this.camera ) {
            this.camera.aspect = window.innerWidth / window.innerHeight;
            this.camera.updateProjectionMatrix();
            this.renderer.setSize( window.innerWidth, window.innerHeight );
        }
    }
}


const app = new App();
await app.init();
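The depth linearization in the fragment shader above is ordinary arithmetic, so it can be sanity-checked in plain JavaScript (a sketch mirroring the two formulas, not three.js API code):

```javascript
// Mirror of the WGSL formulas: zLinear maps a non-linear [0,1] depth-buffer
// value back to an eye-space distance between near and far; zNormalized then
// rescales that distance linearly into [0,1].
function linearizeDepth(depth, near, far) {
  const zLinear = (near * far) / (far - depth * (far - near));
  return (zLinear - near) / (far - near);
}

const near = 0.01, far = 10; // the camera range used in the example above

console.log(linearizeDepth(0, near, far));   // ~0 (near plane)
console.log(linearizeDepth(1, near, far));   // ~1 (far plane)
console.log(linearizeDepth(0.5, near, far)); // small: most depth precision sits near the camera
```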

Spiri0 commented Dec 10, 2024

I just tested it with your changes. It works. I don't actually need the post-processing; it was just for testing. But it turned out to be doubly useful, because I found that the textures from pass and depthPass always have a size of 1 x 1 when I assign them to a compute shader, i.e. they stay in their initial state. Only if I first run a fragment shader to which I have passed the pass and depthPass textures are they also initialized for the compute shader.

I suspect that this is the important element in your modification of my example:

this.sceneDepthPass.next( fragmentShader( shaderParams ) );

Is it a good option to assign this directly to the compute shader?

this.compute = computeShader( { 

    depthTexture: this.sceneDepthPass.next( someFragmentShader( someShaderParams ) )

} ).compute( size );

The order is often compute shader -> vertex shader -> fragment shader, so I'm wondering what the best way is to assign pass and depthPass textures to a compute shader that runs before the fragment shader has executed for the first time.

In any case, I'm really happy about the extension, because culling works with it. Good job Sunag, as always 👍

Spiri0 commented Dec 11, 2024

It is surprising that normal rendering does not update the depthPass for the compute shader, so that I have to rely on post-processing for an update. I need to look at it calmly, because speculation doesn't help.

Spiri0 commented Dec 11, 2024

The node system is awesome and I like it a lot, but sometimes a more basic approach is more practical. I have created a renderTarget depthTexture which I render every interval before the scene. That's only 6 lines, or 7 if you count the resize as well. I only mention this here in case it might be important for other topics in the future.

from: https://threejs.org/examples/?q=depth#webgpu_depth_texture

this.renderTarget = new THREE.RenderTarget( this.width, this.height );
this.renderTarget.depthTexture = new THREE.DepthTexture();
this.renderTarget.depthTexture.type = THREE.FloatType;

This is automatically initialized to the maximum depth right at the beginning, when it is first rendered:

this.renderer.setRenderTarget( this.renderTarget );
this.renderer.render( this.scene, this.camera );
this.renderer.setRenderTarget( null );

which is very convenient for GPU culling. The depthTexture of the renderTarget can be assigned to the compute shader with the texture node and used as texture_depth_2d in the compute shader.

see the triangle numbers

Thanks to the GPU frustum visibility check, the frame rate is already at its maximum, so culling itself would not strictly be necessary. But with culling you can make things even easier for weaker devices. If the struct extension also makes it into r172, three.js will have everything important together by Christmas to efficiently create endless forests or cities.

sunag commented Dec 11, 2024

I'm also in doubt about the need for next(); do you still need it in your project? I would just add the other improvements from the PR.

Spiri0 commented Dec 11, 2024

I recommend not using the next part. Referencing other shaders seems like unnecessary effort to me when you can easily control the initialization time precisely with the renderTarget. I only used post-processing for analysis purposes, and it also works with the renderTarget; I tested it. On the user side, the renderTarget is easy to use.
It used to be unusable for something like this because it led to read/write conflicts, if you still remember that. This must have been fixed by one of your extensions at some point in the last few months. I thought I would just test it again, and now it works fine with shaders.
