Cannot read from depth buffer in FS

Hello!

I’ve recently started work on a game engine, but I’ve run into a slight hiccup - I cannot get my fragment shader to read from a depth texture. To be clear, the fragment shader can read from color textures, and the depth texture itself is being written correctly - I verified that with glReadPixels - but the fragment shader just won’t read the depth data.

Setup:
I have one FBO, fbo, that I use for my first pass. I then want to visualize the data in fbo.depthTexture by drawing it on screen.
Here is the relevant code:

Main Game Loop:


            // Draw my original scene (consisting of one model) to fbo.
            fbo.bind();
            shader.bind();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            model.render();

            // Unbind fbo (bind default fbo, ie, screen) and draw the fbo.depthTexture.
            fbo.unbind();
            fullscreenQuadTexturedShader.bind();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            fullscreenQuadTexturedShader.setInt("tex", 0);
            // Uncommenting the next line would bind the 0th (first) color attachment at GL_TEXTURE0.
            // fbo.bindColorTexture(0, 0);
            fbo.bindDepthTexture(0); // Bind the depthTexture to GL_TEXTURE0
            sqMesh.render();

            // Debugging code

            fbo.bind();

            ByteBuffer buffer = BufferUtils.createByteBuffer(display.getWidth() * display.getHeight());
            glReadPixels(0, 0, display.getWidth(), display.getHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, buffer);
            for (int i = 0; i < display.getWidth() * display.getHeight(); i++) {
                int depth = Byte.toUnsignedInt(buffer.get(i)); // This works perfectly.
            }

Fragment Shader:


#version 330

in vec2 fsTexCoord;

out vec3 finalColor;

uniform sampler2D tex;

void main()
{
    vec4 texColor = texture(tex, fsTexCoord);
    finalColor = texColor.rrr;
}

Relevant FBO functions:


    public FBO attachDepthTexture() {
        depthTextureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, depthTextureId);

        // I've also tried GL_DEPTH_STENCIL with GL_DEPTH_STENCIL and GL_UNSIGNED_INT_24_8, but got the same results.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
        // Added for experimentation
        // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureId, 0);

        glBindTexture(GL_TEXTURE_2D, 0);

        return this;
    }

    public void bindDepthTexture(int location) {
        glActiveTexture(GL_TEXTURE0 + location);
        glBindTexture(GL_TEXTURE_2D, depthTextureId);
    }

As mentioned above, replacing bindDepthTexture with bindColorTexture gives the expected result, and so does the glReadPixels at the end. Any tips on what I’m missing would be appreciated!

P.S. I’m aware that this question has been asked all too often on this forum and I’ve gone through them all, and yet I cannot figure out what the problem is.

What do you mean by “won’t read the depth data”?

Ah, sorry for the ambiguity.

What I mean is, the value read in the FS is always 0. (Pure 0, I’ve verified this. No linearizing shenanigans.)

glReadPixels, on the other hand, gives the proper values (mostly 1, and ~0.3 where my geometry is being drawn).

And no errors (see glGetError)? Plus the FBO is complete?

If, for above, everything is fine, then:

  • do you call glReadBuffer(GL_NONE) before attempting to read pixels from your fbo?
  • what about glGetTexImage? Does it give the expected results?

And no errors (see glGetError)?

Yeah, no errors.

Plus the FBO is complete?

Again, yes.

  • do you call glReadBuffer(GL_NONE) before attempting to read pixels from your fbo?

No. From what I understand, I have to bind my fbo before reading the depth value, and that is exactly what I do: fbo.bind() is a wrapper for glBindFramebuffer(GL_FRAMEBUFFER, id). Following that, I do a normal glReadPixels(…, GL_DEPTH_COMPONENT, …), which returns the expected result. Am I missing something here?

  • what about glGetTexImage? Does it give the expected results?

Yes, it produces results identical to glReadPixels. Here is how I bind my depth texture:


    public void bindDepthTexture(int location) {
        glActiveTexture(GL_TEXTURE0 + location);
        glBindTexture(GL_TEXTURE_2D, depthTextureId);

        // New debugging code: read the depth texture back, one byte per pixel.
        ByteBuffer buffer = BufferUtils.createByteBuffer(display.getWidth() * display.getHeight());
        glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, buffer);
        for (int i = 0; i < display.getWidth() * display.getHeight(); i++) {
            int depth = Byte.toUnsignedInt(buffer.get(i));
        }
    }

The values are all 255, with 87 (~0.3 in normalized form) for areas with geometry. Nothing seems wrong here.

If you want to read depth values via glReadPixels(), then the texture must be bound to the FBO and the FBO itself must be bound.

If you want to read depth values directly from the texture within a shader, the texture must not be in use as a framebuffer attachment, i.e. either it must be unbound from the FBO or the FBO itself must be unbound. Reading from a texture which is currently being used as a rendering target is undefined.
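If you ever do need to keep the FBO itself bound while sampling, detaching the texture from it also satisfies this rule. A minimal sketch, reusing the id and depthTextureId fields from the FBO class above:

    // Detach the depth texture so it is no longer a render target;
    // passing texture id 0 to glFramebufferTexture2D removes the attachment.
    glBindFramebuffer(GL_FRAMEBUFFER, id);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 0, 0);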

If you’re reading a depth texture via a sampler2D uniform, GL_TEXTURE_COMPARE_MODE must be GL_NONE. If GL_TEXTURE_COMPARE_MODE is GL_COMPARE_REF_TO_TEXTURE, you must use a sampler2DShadow uniform instead (and texture() will return the result of the comparison, not the depth value).
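So while debugging it is worth forcing the compare mode off, even though GL_NONE is the default. A minimal sketch against the depthTextureId from your FBO class:

    // Ensure a plain sampler2D returns the stored depth value rather than a
    // shadow-comparison result (GL_NONE is the default, but being explicit
    // rules this out as the cause).
    glBindTexture(GL_TEXTURE_2D, depthTextureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);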

If you want to read depth values directly from the texture within a shader, the texture must not be in use as a framebuffer attachment, i.e. either it must be unbound from the FBO or the FBO itself must be unbound. Reading from a texture which is currently being used as a rendering target is undefined.

Before sampling the texture in the FS, I already call glBindFramebuffer(GL_FRAMEBUFFER, 0), which binds the default framebuffer for both reading and drawing, so the depth texture is no longer a render target at that point. For the sake of thoroughness, I also added an explicit glReadBuffer(GL_NONE) before sampling the depth texture in the FS… and it still fails.

If you’re reading a depth texture via a sampler2D uniform, GL_TEXTURE_COMPARE_MODE must be GL_NONE.

I believe that’s the default anyway? But again, for the sake of completeness, I’ve already tried setting it explicitly (my original source above still has that line, commented out after it made no difference), without luck.

Apparently, setting the GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER to GL_NEAREST is essential for the FS to be able to read the depth texture.
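For anyone who hits the same wall, here is the corrected attachDepthTexture() - a sketch that is just the function from my original post with the two filter lines added:

    public FBO attachDepthTexture() {
        depthTextureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, depthTextureId);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
        // Only mipmap level 0 exists, so the minification filter must not use
        // mipmaps. The GL default, GL_NEAREST_MIPMAP_LINEAR, does use them,
        // which left the texture incomplete - and sampling it returned 0.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureId, 0);

        glBindTexture(GL_TEXTURE_2D, 0);

        return this;
    }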

And with that, I believe my problem is solved.

Thanks for taking the time to help :)

Apparently, setting the GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER to GL_NEAREST is essential for the FS to be able to read the depth texture.
If a texture doesn’t have all of its mipmap levels defined, the minification filter must be one which doesn’t use mipmaps (i.e. GL_NEAREST or GL_LINEAR). The default minification filter is GL_NEAREST_MIPMAP_LINEAR, which does use them, so with only level 0 allocated your texture was incomplete, and sampling an incomplete texture in a shader returns zero - which matches exactly what you saw. This is true regardless of whether you’re using shaders or the fixed-function pipeline.

Also, even if all of the mipmap levels exist, if you’re using a texture as a framebuffer attachment, rendering only modifies the attached mipmap level. So unless you’re explicitly generating the other mipmap levels, you probably don’t want to use a minification filter which uses mipmaps.
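If mipmapped minification were ever actually wanted on that depth texture, the levels would have to be regenerated after each render pass. A sketch, assuming the driver supports mipmap generation for depth formats (support and filtering quality vary between implementations):

    // Regenerate all mipmap levels from level 0 after rendering to it,
    // then a mipmap-based minification filter is legal again.
    fbo.unbind();
    glBindTexture(GL_TEXTURE_2D, depthTextureId);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);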