I have a test program that renders a scene to a depth texture (GL_DEPTH_COMPONENT24). I texture a quad with that texture and the result is a grayscale image of the depth buffer. How is the depth stored in the texture? I am assuming that if it is a 24-bit depth texture, then 8 bits go into each of R, G, and B… Is it as simple as
scaled_range_value = (red << 16) + (green << 8) + blue
or something similar?
CD
A depth-component texture is not stored as an RGB texture. It’s basically a high-precision (16 or more bits per texel), single-channel texture format.
GL_DEPTH_TEXTURE_MODE controls how a depth texture is mapped to luminance, intensity, or alpha when you use it as a color texture. Think of the depth values as grayscale values in that case.
So… if I use a GL_DEPTH_COMPONENT24 texture to texture a quad, for example, then I would see a grayscale image of the depth buffer on my quad… with R = G = B = depth clamped to [0, 255]?
CD
Originally posted by Brian Paul:
[b] A depth-component texture is not stored as an RGB texture. It’s basically a high-precision (16 or more bits/texel), single-channel texture format.
GL_DEPTH_TEXTURE_MODE controls how a depth texture is mapped to luminance, intensity or alpha if you try to utilize it as a color texture. Think of the depth values as grayscale values in that case. [/b]
It’s not clamped, it’s scaled down. You’ll get the 8 most significant bits.