Warning : Readpixel depth component value
01-28-2003, 12:23 AM
Something I just noticed:
// l_dZ is a GLdouble -- gives an INCORRECT Z value
glReadPixels(a_iX, l_iY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &l_dZ);
// l_fZ is a GLfloat -- gives the CORRECT Z value
glReadPixels(a_iX, l_iY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &l_fZ);
So always read the depth buffer into a GLfloat if you want correct values.
01-28-2003, 01:11 AM
That should be pretty obvious. You can't pass a pointer to a double, tell OpenGL it points to a float, and expect a correct result.
01-28-2003, 01:39 AM
Exactly: you would specify GL_DOUBLE for a double destination, not GL_FLOAT. The pointer is a void*, so there is no concept of intrinsic type other than what you specify in the type token, such as GL_FLOAT. That token is the destination type, not the source type; OpenGL already knows the source type. There is no function overloading here: OpenGL is a C interface, not C++, and that matters in this respect.
01-28-2003, 02:03 AM
I just wanted to emphasise this problem.
This mistake is made in several examples I found on the web.
This could save some people a lot of debugging / posting time.
NOTE: GL_DOUBLE is not a valid type enumerant for glReadPixels. For reading the Z buffer as a floating-point value, GL_FLOAT is the only choice!
01-28-2003, 02:10 AM
Of course it would be cool to have a 64-bit Z buffer!
01-28-2003, 05:34 AM
If everything were done in 64-bit float it would make sense, but the graphics card does a significant number of operations in 32-bit float, and the rest in integer or byte formats.
01-28-2003, 12:26 PM
Whoa, a 64-bit Z buffer would ROCK!