glDrawPixels / writing depth values with depth function GL_LESS

I’ve already asked this in the beginners’ forum, but maybe it fits better here…

So I want to write depth values into the z-buffer using glDrawPixels with GL_DEPTH_COMPONENT. If I use the depth function GL_ALWAYS, everything works as expected. However, if I use any other depth function (for example GL_LESS or GL_GEQUAL), no depth values ever get written, and I do not understand why. For glDrawPixels with GL_DEPTH_COMPONENT, is depth testing supposed to give undefined results when the depth function is not GL_ALWAYS?
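
Roughly what I’m doing, as a sketch (width, height, and depth are placeholder names for my image size and source data):

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);   /* depth writes enabled */
    glDepthFunc(GL_LESS);   /* works with GL_ALWAYS, fails with GL_LESS etc. */
    glRasterPos2i(0, 0);
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);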

flo

I’m not sure how glDrawPixels behaves with GL_DEPTH_COMPONENT, but with color data it does not draw directly into the frame buffer; instead it generates fragments that are passed through the GL pipeline just like any other fragment. And this includes depth testing (with the depth taken from the current raster position). Now, I assume the same thing happens with depth components, except that the depth is taken from the source data instead. So maybe you don’t get what you want simply because the fragments generated don’t pass the depth test.
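
If so, a quick experiment would be to write the depth values with GL_ALWAYS while masking out the color buffer, something like this (just a sketch, using the same placeholder names as above):

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* leave color alone */
    glDepthMask(GL_TRUE);                                /* but do write depth */
    glDepthFunc(GL_ALWAYS);                              /* every fragment passes */
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);
    glDepthFunc(GL_LESS);                                /* restore your usual func */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);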

Bob, your description matches what I thought writing depth values with glDrawPixels was supposed to do.

However, it seems that my problem is more subtle than I first thought. In short, I evidently have a depth precision issue. As I just found out, fragments are generated in my program and they do pass the depth test, even with the depth function GL_LESS. However, the written depth values do not match the depth values from main memory at all; instead, they seem to be consistently slightly too big. The depth function that I use later for shading is GL_EQUAL (that’s unfortunately necessary), so I see nothing.

During the glReadPixels / glDrawPixels calls, GL_DEPTH_SCALE is 1.0 and GL_DEPTH_BIAS is 0.0, so the pixel transfer state should not be the problem.
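
To be safe, I pin and verify that state explicitly before the transfer (a sketch):

    GLfloat scale, bias;
    glPixelTransferf(GL_DEPTH_SCALE, 1.0f);
    glPixelTransferf(GL_DEPTH_BIAS, 0.0f);
    glGetFloatv(GL_DEPTH_SCALE, &scale); /* reads back 1.0 */
    glGetFloatv(GL_DEPTH_BIAS, &bias);   /* reads back 0.0 */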

I assume that glDrawPixels with the depth function GL_ALWAYS takes a different driver path than with the depth function GL_LESS (GL_LESS is much slower, by the way, so this assumption seems reasonable). With GL_ALWAYS, the written depth values are exact; with GL_LESS, they are not.

I have a GeForce3 Ti. Could someone from NVIDIA confirm this issue? Any chance of future drivers fixing this problem?

I presume that the data you pass to glDrawPixels() is an array of floats? Don’t forget that your hardware Z-buffer is only 24 bits deep, so you always lose some precision there due to the conversion to the hardware’s native format.
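
For example, a depth value that makes the round trip through a 24-bit fixed-point Z-buffer only survives to about 24 bits; a sketch (the exact conversion and rounding rules are up to the implementation):

    float        z_in  = 0.123456789f;
    /* quantize to 24-bit fixed point: 2^24 - 1 = 16777215 */
    unsigned int zfix  = (unsigned int)(z_in * 16777215.0f + 0.5f);
    float        z_out = (float)zfix / 16777215.0f;
    /* z_out matches z_in only to ~24 bits, which is enough to break GL_EQUAL */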

– Tom

No, I’m using GL_UNSIGNED_INT. This seems to be the best-fitting format for a 24-bit depth buffer, since depth values are fixed-point values between 0 and 1. Also note that with the depth function GL_ALWAYS everything works as expected, so the data format does not seem to be the problem here.
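
Concretely, the round trip looks like this (a sketch; error checking omitted, width and height as before, malloc needs <stdlib.h>):

    GLuint *depth = (GLuint *)malloc(width * height * sizeof(GLuint));
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);
    /* ... later ... */
    glRasterPos2i(0, 0);
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);
    free(depth);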

flo