How to Access FBO Depth Texture in GLSL

Hello,

I am having great difficulties getting a shader to read a depth texture from an FBO. I always get a constant depth value, the same result I get when no texture is bound.

Before sampling it, I detach the texture from the FBO and unbind the FBO.

I created it like this:

// Create depth texture with stencil
glGenTextures(1, &m_depthTextureID);
glBindTexture(GL_TEXTURE_2D, m_depthTextureID);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, m_width, m_height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

// Make it readable in a shader
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
//glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
//glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_depthTextureID, 0);

glBindTexture(GL_TEXTURE_2D, 0);

In the shader, it is declared as:

uniform sampler2D gDepth;

and read like this:

texture2D(gDepth, uv).x;

I tried all sorts of combinations of settings, but I always get just one constant value from the depth texture.
It isn’t a linearization problem, since I am comparing it to projected z coordinates.

How does one properly read a depth texture in a shader?

Thank you for any help you can offer.

If you want automatic depth comparison, you have to use one of the shadow samplers, in your case sampler2DShadow. It performs the comparison for you, which in many cases gives better performance. In that case, though, you really do have to set the GL_TEXTURE_COMPARE_MODE and GL_TEXTURE_COMPARE_FUNC parameters of your texture/sampler.
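For illustration, a minimal sketch of that path (referenceDepth is a placeholder for whatever depth you are comparing against):

// C++ side: enable the hardware comparison on the depth texture
glBindTexture(GL_TEXTURE_2D, m_depthTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

// GLSL side: the third coordinate is the reference depth, and the result
// is the comparison outcome (0.0/1.0, possibly filtered), not a depth value
uniform sampler2DShadow gDepth;
float visibility = shadow2D(gDepth, vec3(uv, referenceDepth)).x;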
Of course, you can also fetch the depth value directly, the way you are trying to, and that should work. In both cases, though, don't forget to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to filtering modes that don't rely on mipmaps (i.e. GL_NEAREST or GL_LINEAR); otherwise your texture is incomplete and sampling it won't work.
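Concretely (the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture without mipmaps stays incomplete until you change it):

// Non-mipmap filters make the single-level texture complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

An incomplete texture samples as if nothing were bound, which matches the constant value you are seeing.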

Thank you for your reply.
I am using it for SSAO, so I don't really want an automatic comparison. The problem is that the texture won't read at all; it is as if I never bound one.
The texture has the stencil buffer packed into it; does that affect how the depth is read from it?

EDIT: I forgot to mention: I am certain the rest of the SSAO code is correct, because it works when I project the positions provided by the position buffer of my deferred renderer instead of using a depth texture.
However, that projection is expensive, so I want to avoid it by reading the depth buffer directly, as sketched below.
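For reference, the direct read I am aiming for looks roughly like this (projection and samplePos are placeholder names; samplePos is an SSAO sample point in view space):

uniform sampler2D gDepth;
uniform mat4 projection;   // placeholder name

// Returns 1.0 if the stored depth occludes the sample point
float occluded(vec3 samplePos)
{
    vec4 clip     = projection * vec4(samplePos, 1.0);
    vec3 ndc      = clip.xyz / clip.w;
    vec2 uv       = ndc.xy * 0.5 + 0.5;  // to [0,1] texture space
    float sampleZ = ndc.z * 0.5 + 0.5;   // same [0,1] range the depth buffer uses
    float storedZ = texture2D(gDepth, uv).x;
    return storedZ < sampleZ ? 1.0 : 0.0;
}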

Have you dumped the depth texture to a file, or rendered it as a texture to the screen, to see what is actually in it? I dump mine to the PPM format, which is almost a raw dump, but most paint programs can view it.
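Roughly like this (a sketch, assuming the GL headers are already set up; it reads the texture back with glGetTexImage and writes a grayscale P6 PPM, error handling omitted):

#include <stdio.h>
#include <stdlib.h>

void dumpDepthTexture(GLuint tex, int w, int h, const char *path)
{
    // Read the depth component back as floats in [0,1]
    float *depth = (float *)malloc(w * h * sizeof(float));
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth);

    FILE *f = fopen(path, "wb");
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    // Note: GL's origin is bottom-left, so the dump comes out vertically flipped
    for (int i = 0; i < w * h; ++i) {
        unsigned char g = (unsigned char)(depth[i] * 255.0f);
        fputc(g, f); fputc(g, f); fputc(g, f);  // gray RGB triple
    }
    fclose(f);
    free(depth);
}

Keep in mind that a non-linearized depth buffer tends to look almost uniformly white in such a dump; even so, a faint gradient versus a truly constant image tells you whether the texture contains anything.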

I use it as the depth buffer for all my rendering, so it definitely has something in it. Also, glReadPixels returns the depth values properly.
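That is, something like this returns sensible values while the FBO is bound for reading (m_fboID, x and y are placeholder names):

glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fboID);
float d;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);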

Are you checking for GL errors?

If so, sounds like it’s time to post a small GLUT test program that illustrates your problem.
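For completeness, the check is trivial; drain it in a loop, since several errors can be queued:

GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    fprintf(stderr, "GL error: 0x%04X\n", err);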