High precision depth read on Radeon 9000

I’ve been able to read the depth buffer using glReadPixels and passing in GL_DEPTH_COMPONENT, but the precision somewhat sucks. I tried using GL_DEPTH_COMPONENT16 and 24, but I get a blank screen. I know that GL_ARB_shadow is not supported on my card, but is there any way to get more precision when reading the depth buffer if you can’t use GL_DEPTH_COMPONENT16 or 24?

Passing sized formats like GL_DEPTH_COMPONENT16/24 as the format argument of glReadPixels is an INVALID_ENUM error.
You need to use the type argument instead. Take your pick, but I’ll recommend GL_FLOAT. It’s the best performing option on R200, and I somewhat assume it’ll be the same on RV250.

/* allocate one float per pixel, then read the depth buffer back as floats */
float* stuff = (float*)malloc(width * height * sizeof(float));
glReadPixels(x, y, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, stuff);

I am using GL_FLOAT, but I’m not sure if it’s actually storing the values using 16 bits or 8 bits. Does it automatically store them as 16 or more bits when you use GL_FLOAT? Because when I render the depth map to a flat square, I can’t differentiate very well between the shades of gray. It looks like one solid gray color for a complex object that spans pretty far in each direction. Any ideas?

I’m pretty sure GL_FLOAT needs more work to convert from native depth buffer bits to IEEE floating point. GL_UNSIGNED_INT should be faster in general (or the driver is amiss).
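For reference, a minimal sketch of the GL_UNSIGNED_INT variant (variable names and the x/y/width/height values are just placeholders, not from the original post):

/* read depth as 32-bit unsigned ints; per the spec, the [0,1] depth value
   is scaled to the full GLuint range (0 .. 2^32-1) */
GLuint* depth = (GLuint*)malloc(width * height * sizeof(GLuint));
glReadPixels(x, y, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth);
/* convert back to a normalized [0,1] value if you need one */
float d0 = (float)((double)depth[0] / 4294967295.0);
free(depth);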

Read the glReadPixels manual. The float values you get back from a depth-component glReadPixels are scaled to lie between 0 and 1 and are in IEEE 32-bit float format.

If you mean the number of depth bits in the depth buffer, check the pixel format you selected or call glGetIntegerv(GL_DEPTH_BITS, …)
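For example (a minimal sketch, assuming a current GL context):

/* query how many bits the current depth buffer actually has (e.g. 16 or 24) */
GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
printf("depth buffer has %d bits\n", depthBits);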

Originally posted by oconnellseanm:
Cause when I render the depth map to a flat square, I can’t differentiate very well between the colors of gray.

Even if the depth map is being stored at full precision, rendering it to a display with 8 bits per component means you cannot see the differences.
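One way to check that the precision really is there (not from the original thread, just an illustrative sketch reusing the ‘stuff’ array from the earlier example): find the min and max of the values you read back and stretch that range to [0,1] before displaying them, so small depth differences become visible shades of gray.

/* stretch the occupied depth range to [0,1] so differences become visible */
float lo = 1.0f, hi = 0.0f;
int i, n = width * height;
for (i = 0; i < n; i++) {
    if (stuff[i] < lo) lo = stuff[i];
    if (stuff[i] > hi) hi = stuff[i];
}
if (hi > lo) {
    for (i = 0; i < n; i++)
        stuff[i] = (stuff[i] - lo) / (hi - lo);
}
/* 'stuff' can now be drawn (e.g. with glDrawPixels as GL_LUMINANCE, GL_FLOAT)
   and the contrast will use the full 8-bit range of the display */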