ReadPixels( DEPTH, FLOAT ) - Strange results

Here’s what I’m seeing on a 24-bit depth buffer with glReadPixels( GL_FLOAT ):

0xffffff = 1.0
0xfffffe = 1.0
0xfffffd = 0.999999940395
0xfffffc = 0.999999880791
0xfffffb = 0.999999821186

0x000001 = 0.000000059605
0x000000 = 0.000000000000

NOTE: I’ve verified the fixed-point value via glReadPixels( GL_UNSIGNED_INT_24_8 ).

Check me on this, but I believe the correct fixed-point to normalized 0…1 float conversion is:

float = fixed / ( 2^24-1 )

However, I can only explain these results (down to the last digit, I might add!) if I use this instead:

float = min( 1.0, fixed / ( 2^24-2 ) )
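For concreteness, here’s a quick standalone check of both formulas against the 0xfffffe case (plain C, just the arithmetic above; the names are mine, nothing driver-side):

#include <stdio.h>

int main( void )
{
  unsigned int fixed = 0xfffffe;

  /* What I believe the spec mandates: divide by 2^24-1 */
  double spec = fixed / (double)0xffffff;

  /* The only formula that explains the observed output: divide by 2^24-2, clamped */
  double observed = fixed / (double)0xfffffe;
  if ( observed > 1.0 )
    observed = 1.0;

  printf( "fixed / (2^24-1)           = %.12f\n", spec     );  /* 0.999999940395 */
  printf( "min( 1, fixed / (2^24-2) ) = %.12f\n", observed );  /* 1.000000000000 */
  return 0;
}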

Before I submit a bug report: is the former definitely correct? Or is there something subtle here I’m unaware of?

(Why I care: doing some statistics crunching on the depth buffer, I get different 0…1 results between reading the depth texture in a shader and reading it with ReadPixels FLOAT on the CPU [for debug purposes]. The shader appears to return the correct values [not the ones above]. And with a perspective projection, one depth step out toward the far clip plane is a non-trivial distance – potentially several meters!)

NVidia 260.19.44 drivers.

It’s probably your projection matrix. Perspective projection doesn’t map z values linearly: most of the depth precision is concentrated near the near clip plane.
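For a standard glFrustum/gluPerspective projection with near plane n and far plane f (and the default 0…1 depth range), an eye-space distance z maps to window depth d = f*(z - n) / (z*(f - n)). For example, with n = 0.1 and f = 100, z = 0.2 already lands at d ≈ 0.5: half the representable depth values are spent before you’re even 0.1 units past the near plane.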

No, I’m certain that’s not it. There’s no projection matrix involved.

I’m setting these values using glClearDepth/glClear, then reading them back using both glReadPixels( GL_UNSIGNED_INT_24_8 ) and ( GL_FLOAT ), and comparing the results.

Here’s a short GLUT test program that illustrates the problem:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

const int RES = 512;

//-----------------------------------------------------------------------

void checkGLErrors( const char *s )
{
  while ( 1 )
  {
    int x = glGetError() ;

    if ( x == GL_NO_ERROR )
      return;

    fprintf( stderr, "%s: OpenGL error: %s
", 
             s ? s : "", gluErrorString ( x ) ) ;
  }
}

//-----------------------------------------------------------------------

void doTest()
{
  //-----------------------------------------------------
  // Clear depth texture and check values
  //-----------------------------------------------------

  // Create depth texture (24-bit fixed)
  GLuint depth_tex;
  glGenTextures  ( 1, &depth_tex );
  glBindTexture  ( GL_TEXTURE_2D, depth_tex );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER  , GL_NEAREST );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER  , GL_NEAREST );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S      , GL_CLAMP_TO_EDGE );
  glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T      , GL_CLAMP_TO_EDGE );
  glTexImage2D   ( GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, RES, RES, 0, 
                   GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, 0 );

  // Create/bind FBO to write depth texture
  GLuint depth_fbo;
  glGenFramebuffers( 1, &depth_fbo );
  glBindFramebuffer( GL_FRAMEBUFFER, depth_fbo );
  glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, 
                          GL_TEXTURE_2D, depth_tex, 0 );

  // Clear to next-to-last depth step
  //   NOTE: 0xffffff = 2^24-1.  
  //glClearDepth( float( 0xffffff ) / float( 0xffffff ) );
  glClearDepth( float( 0xfffffe ) / float( 0xffffff ) );
  //glClearDepth( float( 0xfffffd ) / float( 0xffffff ) );
  glClear( GL_DEPTH_BUFFER_BIT );

  // See what "integer" depth value is in the framebuffer
  //   NOTE: It should be 0xfffffe.  
  //   AND : It is (on NVidia 260.19.44).
  GLuint pval;
  glPixelStorei( GL_PACK_ALIGNMENT, 1 );
  glReadPixels( 0, 0, 1, 1, GL_DEPTH_STENCIL, 
                GL_UNSIGNED_INT_24_8, &pval );
  printf( "ReadPixels DEPTH uint  = 0x%.6x
", pval >> 8 );

  // NOW see what 0..1 "normalized" depth value the driver says is there.
  //   NOTE: Should be (2^24-2)/(2^24-1) = 0xfffffe/0xffffff = 0.999999940395
  //   BUT!: It's 1.000000000000 (on NVidia 260.19.44)
  float fval;
  glReadPixels ( 0, 0, 1, 1, GL_DEPTH_COMPONENT, 
                 GL_FLOAT, &fval );
  printf( "ReadPixels DEPTH float = %.12f
", fval );

  checkGLErrors( "doTest - END" );
}

//-----------------------------------------------------------------------

int main ( int argc, char **argv )
{
  // Init GL context
  glutInit            ( &argc, argv ) ;
  glutInitDisplayMode ( GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE ) ;
  glutInitWindowSize  ( 500, 500 ) ;
  glutCreateWindow    ( "GL Test" ) ;

  doTest();

  //glutMainLoop () ;
  return 0 ;
}
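
(For reference, I build this on Linux with something like: g++ test.cpp -o test -lglut -lGLU -lGL. The float(…) casts mean it wants a C++ compiler.)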


I ran your code on an ATI HD 3300. I ran into a couple of problems: the FBO wasn’t complete, since it had no color attachment, and I got a GL error when attempting to run it. So I had to call glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE) after setting up the FBO.
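i.e., something like this right after the glFramebufferTexture2D call:

  // No color attachment, so disable color draws and reads.
  // Without this, the FBO is incomplete on my driver.
  glDrawBuffer( GL_NONE );
  glReadBuffer( GL_NONE );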

But otherwise, the values come out correct: the depth is 0xfffffe, and the corresponding float is 0.999999940395. This may be NV-specific.

Thanks for the cross-check. Just submitted a problem report on this to NVidia (after making the DrawBuffer/ReadBuffer change).