
Only getting 8 bits of precision with depth textures



MalcolmB
11-04-2005, 11:19 AM
I'm using a depth texture in a GLSL program to do some depth-based outlining as a post process. I build the depth texture using glCopyTexImage/SubImage with a GL_DEPTH_COMPONENT internal format (I also tried GL_DEPTH_COMPONENT24).
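
A minimal sketch of that copy path, assuming a bound GL_TEXTURE_2D; depthTex, width and height are placeholder names, not from the original post:

glBindTexture(GL_TEXTURE_2D, depthTex);
/* Allocate a window-sized depth texture once. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
/* ...render the scene... */
/* Then copy the current depth buffer into the texture each frame. */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);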

I've checked the bit depth of the texture using glGetTexLevelParameter and I get 24 as expected.

I've searched these forums and found that I need to make sure GL_NEAREST filtering is used on the depth texture to get full precision. I've done this, but I still only get 8 bits.

Also I've made sure that GL_TEXTURE_COMPARE_MODE is set to GL_NONE.

The depth texture is sampled in a GLSL shader as a sampler2D using texture2D.

I'm running on an Nvidia 6800.

Does anyone know of any other states that may affect the precision when sampling a depth texture?

Thanks

PickleWorld
11-09-2005, 11:39 AM
Question: how do you know you are only getting 8 bits?

I take it you are doing something like...

vec4 depthVec = texture2D(depthImage, gl_TexCoord[0].st);
float depthVal = depthVec.r;

MalcolmB
11-09-2005, 03:18 PM
Good question. To test the bit depth I've written a shader that outputs the depth value to the .a channel of the color buffer.

When I use the value obtained from the .a channel instead of the depth texture's value, I get the same results as with the depth buffer. But if I render to a 16-bit floating-point color buffer, I get the good outlining results I'm hoping for.
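
The test shader amounts to something like this (a sketch, not the original code):

void main()
{
    // Write the window-space depth into the alpha channel so its
    // effective precision shows up in the color buffer.
    gl_FragColor = vec4(0.0, 0.0, 0.0, gl_FragCoord.z);
}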

jide
11-09-2005, 10:59 PM
So what? I don't understand what is the problem here and what isn't. Could you explain?

def
11-10-2005, 12:06 AM
MalcolmB: you are being very vague about what you are doing to test the precision, so I can't help you there, but this is my setup, which works:


glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, Width, Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, Width, Height);

Using this texture in a shader gives me a 24-bit precision depth texture with values from 0.0 to 1.0.

MalcolmB
11-10-2005, 09:47 AM
I'm sorry. I'll try to explain better.
I'm using the depth value to detect edges on objects so I can put black outlines around them (to give it a toon look). In a post-process pass I compare the current pixel's depth value with all its neighbours' depth values and test the difference against a threshold to decide whether it's an edge or not.
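
In GLSL that comparison might look roughly like this; depthTex, texelSize and threshold are assumed uniforms, and only the four axis-aligned neighbours are shown:

uniform sampler2D depthTex;
uniform vec2 texelSize;   // 1.0 / texture dimensions (assumed)
uniform float threshold;  // edge threshold (assumed)

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float center = texture2D(depthTex, uv).r;
    // Largest depth difference against the four axis-aligned neighbours.
    float d = 0.0;
    d = max(d, abs(center - texture2D(depthTex, uv + vec2(texelSize.x, 0.0)).r));
    d = max(d, abs(center - texture2D(depthTex, uv - vec2(texelSize.x, 0.0)).r));
    d = max(d, abs(center - texture2D(depthTex, uv + vec2(0.0, texelSize.y)).r));
    d = max(d, abs(center - texture2D(depthTex, uv - vec2(0.0, texelSize.y)).r));
    // Black outline where the depth discontinuity exceeds the threshold.
    gl_FragColor = (d > threshold) ? vec4(0.0, 0.0, 0.0, 1.0) : vec4(1.0);
}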

Initially I outputted the depth value to the .a channel in a 16-bit float pbuffer and then copied this to a 16-bit float texture. This gave me good outlines.

I then decided to try a depth texture instead, since that is obviously a smarter way to do it. When I did this, instead of nice outlines I get what look like depth contour lines (like on a map) on my objects. It's hard to know exactly what's going on, but my assumption is that these lines are where the depth values change from one value to the next. For comparison I went back to my original method of outputting the values to the .a channel of a texture, but this time I used a normal 8-bit-per-channel texture. This gave the same results as the depth texture, which is why I think I'm only getting 8 bits of precision.

To further test this I've made a shader that takes the resulting depth buffer and compares the current pixel against one of its neighbours (see the sketch below). If their values are different I set the output to white, otherwise black. On the depth texture I get the same contour lines (most of the image is black), which means very few of the values I'm getting are different.
On the 8-bit texture I get the same contour lines.
On the 16-bit texture I get almost all white, as expected.
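
That test reduces to something like this (again a sketch, with depthTex and texelSize as assumed uniforms):

uniform sampler2D depthTex;
uniform vec2 texelSize; // 1.0 / texture dimensions (assumed)

void main()
{
    float d0 = texture2D(depthTex, gl_TexCoord[0].st).r;
    float d1 = texture2D(depthTex, gl_TexCoord[0].st + vec2(texelSize.x, 0.0)).r;
    // White where neighbouring samples differ, black where quantization
    // has collapsed them to the same value.
    gl_FragColor = vec4(d0 != d1 ? 1.0 : 0.0);
}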

Hope I'm clearer now.

def
11-11-2005, 07:33 AM
Originally posted by MalcolmB:
Hope I'm clearer now.

We are getting there... ;)


Initially I outputted the depth value to the .a channel in a 16-bit float pbuffer and then copied this to a 16-bit float texture. This gave me good outlines.

Are you saying that you are using a shader to render each fragment's depth value to the alpha channel and not reading back the framebuffer at all?
If this is the case then there is just a difference in coordinate space between the two methods.
Try outputting the depth texture to the screen and compare the two methods.

I think the depth texture values are in screen space, while your pbuffer solution gives world-space depth values.

http://www.opengl.org/resources/faq/technical/depthbuffer.htm

MalcolmB
11-11-2005, 10:13 AM
Well, the value written to the .a channel comes from gl_FragCoord.z, which is screen-space depth. Remember that I get the same results when using an 8-bit pbuffer as I do when I use the depth buffer.

Yes, the pbuffer method and the depth texture method create textures that look the same visually.

MalcolmB
11-16-2005, 01:56 PM
Problem solved. I have learned that on Nvidia GeForce 6800 and Quadro 4500 hardware (I won't vouch for future hardware, because who knows what will change) you *must* use these settings to get full precision:

GL_DEPTH_TEXTURE_MODE_ARB = GL_LUMINANCE
GL_TEXTURE_COMPARE_MODE_ARB = GL_NONE
GL_TEXTURE_MIN_FILTER = GL_NEAREST
GL_TEXTURE_MAG_FILTER = GL_NEAREST

The depth texture's internal format must be GL_DEPTH_COMPONENT24_ARB.
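
Pulled together as code (assuming the depth texture is already created and bound to GL_TEXTURE_2D):

/* Required state for full-precision depth sampling on this hardware. */
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);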

As def said, this is what he uses. Thanks for the help, everyone.