Useful depth texture reads

Could someone perhaps outline the different methods for writing/reading a depth texture (note that I’m NOT looking for comparisons, I want the depth value) with as much precision/consistency as possible?

err, to avoid misunderstanding, I just want to sample the depth in the fragment shader.

From the specs of ARB_shadow:

Let Dt (D subscript t) be the depth texture value, in the range
[0, 1]. Let R be the interpolated texture coordinate clamped to
the range [0, 1]. Then the effective texture value Lt, It, or At
is computed by

if TEXTURE_COMPARE_MODE_ARB = NONE

  r = Dt

For writing, use glCopyTexSubImage or an FBO.
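A rough sketch of the glCopyTexSubImage route (untested; depthTex, winWidth and winHeight are placeholder names, and I’m assuming a window-sized GL_DEPTH_COMPONENT24 texture):

/* one-time setup: allocate a depth texture matching the window size */
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, winWidth, winHeight, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* every frame, after rendering: copy the depth buffer into the texture */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winWidth, winHeight);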

Hi, I’m currently looking for the same information! Some say that you can only compare, not sample; others say that you can sample, but only get an 8-bit value (not the full-precision depth value).

I would be quite disappointed if this was true: so many cool effects require this value!

I can’t say whether it’s only sampled at 8 bits, apart from the fact that it looks like more than a simple 256-level grayscale image. However, why would it be limited?
Anyway, this seems to work for me.

As soon as the texture parameters are set, simply put this code in your fragment shader:

gl_FragColor = shadow2DProj(sample, tex_coord);
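The texture parameters I mean are roughly these (just a sketch, assuming the depth texture is bound to GL_TEXTURE_2D; per the ARB_shadow text quoted above, GL_NONE is what makes the lookup return the raw depth value rather than a comparison result):

/* return the depth value itself instead of a comparison result */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
/* how the depth value shows up in the RGBA result when sampled */
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);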

You can use glCopyTexSubImage2D() to read back 32-bit precision depth values. Set up the texture as mentioned in this thread:

http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=3;t=013943;p=1

Thanks for the replies.

Btw, it samples at 8 bits on this ATI Radeon 9600 Pro; it seems you can only read depth values at higher than 8-bit precision by using floating-point render targets.

Try toying with the DEPTH_TEXTURE_MODE_ARB state. Change that from LUMINANCE to INTENSITY to ALPHA. I believe INTENSITY gives a full precision value on NVidia cards. For ATI cards, I’m not sure… if none of those gives you full precision, then it just might not be possible to get.
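In code the toggle is just the following (pick one, with the depth texture bound to GL_TEXTURE_2D):

glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE); /* depth in R, G, B */
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY); /* depth in R, G, B and A */
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_ALPHA);     /* depth in A only */

Whichever mode you choose, make sure the channel you read in the shader matches it.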

Kevin B

Unfortunately none of those work.

Depth textures should be 16-bit on ATI. How are you determining that you only get 8 bits?

Well, I can’t say for certain whether it’s 8 or 16 bits; the banding is pretty horrifying, but you’re probably right.

If you’re doing a visual inspection, keep in mind that you’re writing to an 8 bits/channel framebuffer.

Of course, but when doing something with the depth, the difference between an operation on depth not read from a texture and depth read from a texture becomes obvious. In this case, whether it’s 8 or 16 bits doesn’t really matter; it’s still not useful.

I just didn’t take the care to establish the depth precision more rigorously.

Also, does this mean that when using depth comparison funcs, we’re getting 16-bit vs. 24-bit compares?

Another issue I have is that when writing to gl_FragDepth, it looks like it writes at the same precision it reads from depth textures, even when I don’t use depth textures.

To Humus:
Finally, I’d like to thank you for clearing up a few things; I’ve had a huge number of mixed replies, not a single one actually reflecting what testing showed.

Depth comparison is done at the precision you have in the fragment shader, which is 32-bit for the X1K series and 24-bit for earlier hardware, but of course if the source data is lower precision there’s no magical solution to that.

What do you mean by the gl_FragDepth problem? If you read in data, you of course don’t get any more precision when you write it out again.

Originally posted by def:
You can use glCopyTexSubImage2D() to read back 32-bit precision depth values. Set up the texture as mentioned in this thread:

http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=3;t=013943;p=1
I’m late to this party, but I followed the instructions both here and in the referred-to thread and am still seeing harshly aliased depth values on my FX Go 5200 (running OS X 10.4.7).

Is this a limitation of an (admittedly) terrible graphics card?

Or, if it should work, what should I be using in my GLSL code to sample it? I’m using a sampler2DRect and texture2DRect to perform the lookup. Is that a sure-fire way to cause aliasing? I’ve never used sampler2DShadow or shadow2D, so I’m not certain what it would take to convert my code, other than squishing my non-POT texture into, say, 512x512 and sampling that, since the shadow methods don’t seem to have rect variants.

Right now my lookup in GLSL is just a call like this:

float d = texture2DRect( depthMap, gl_FragCoord.st ).r;

I’ve copied, to the letter (and made some variations), the suggestions in the thread linked above, and am still seeing the aliasing.

Any suggestions? This is very frustrating!

EDIT: I found that the shadow methods do have rect variants, but the result is still aliased, for me.

First of all, you should check your actual depth precision with:

GLint params;
/* how many bits did the driver actually allocate for the depth texture? */
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_ARB, 0, GL_TEXTURE_DEPTH_SIZE_ARB, &params);
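If params already reports 24 (or 16 on the ATI parts Humus mentioned), the texture itself has the precision, and the banding is being introduced somewhere later, e.g. by the 8 bits/channel framebuffer you are viewing it in.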
 

No, you do not need to use the shadow samplers for this.
Have you checked your depth range? If it is set inappropriately, or if you have a very large-scale scene, the precision might seem to be less than it actually is. Increase the near clipping plane distance and see what happens.
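For example (a sketch; aspect is a placeholder and the numbers are made up, but the point is that most of the depth buffer’s precision sits close to the near plane, so pushing zNear out helps far more than pulling zFar in):

/* before: a very small zNear wastes most of the depth precision up close */
gluPerspective(60.0, aspect, 0.01, 1000.0);

/* after: same far plane, much better depth distribution */
gluPerspective(60.0, aspect, 1.0, 1000.0);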
