Warning: Readpixel depth component value



P88_Razor
01-28-2003, 01:23 AM
Something I just noticed :

// Gives an INCORRECT Z value: the destination is a GLdouble, but the
// GL_FLOAT type token tells OpenGL to write a 32-bit float into it
GLdouble l_dZ;
glReadPixels(a_iX, l_iY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &l_dZ);

// Gives the CORRECT Z value: the destination type matches the type token
GLfloat l_fZ;
glReadPixels(a_iX, l_iY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &l_fZ);

So always use GLfloat for reading the depth buffer if you want correct values.
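
If you actually need a double on the application side (e.g. to feed gluUnProject), read into a GLfloat and widen it afterwards. Here is a minimal sketch, assuming a fixed-function setup with GLU available; the function and variable names are just made up for illustration:

#include <GL/gl.h>
#include <GL/glu.h>

// Reads the depth under a window position as GL_FLOAT, then widens it to
// GLdouble only where a double is actually required (gluUnProject takes doubles).
void pickWorldPosition(int winX, int winY, GLdouble out[3])
{
    GLdouble model[16], proj[16];
    GLint    view[4];
    GLfloat  depth;   // destination matches the GL_FLOAT type token
    GLint    readY;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // OpenGL window coordinates put y = 0 at the bottom
    readY = view[3] - winY - 1;

    glReadPixels(winX, readY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    // Widen after the read; never hand a GLdouble* to a GL_FLOAT read
    gluUnProject((GLdouble)winX, (GLdouble)readY, (GLdouble)depth,
                 model, proj, view, &out[0], &out[1], &out[2]);
}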

Humus
01-28-2003, 02:11 AM
That should be pretty obvious. Of course you can't pass a pointer to a double, tell GL to write a float into it, and expect a correct result.

dorbie
01-28-2003, 02:39 AM
Exactly; if you want a double, specify GL_DOUBLE rather than GL_FLOAT. That pointer is a void pointer, so there is no concept of an intrinsic type other than what you specify in the type token, like GL_FLOAT. That type is the destination type, not the source type; OpenGL already knows the source type. There is no function overloading here: OpenGL is a C interface, not C++, and that matters in this respect.
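
To illustrate that point (just a quick sketch; x and y are whatever window coordinates you are reading): the same depth pixel can be requested in different destination types, and OpenGL converts its internal depth value to whatever the type token names.

GLfloat zf;
GLuint  zi;

// Depth converted to a float in [0, 1]
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zf);

// Same pixel, scaled to the full unsigned-int range
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &zi);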

P88_Razor
01-28-2003, 03:03 AM
I just wanted to emphasise this problem.
This mistake is made in several examples I found on the web.

This could save some people a lot of debugging / posting time. ;)

NOTE: GL_DOUBLE is an invalid enumerant for glReadPixels; when reading the Z buffer, GL_FLOAT is the only floating-point type you can request!

P88_Razor
01-28-2003, 03:10 AM
Of course it would be cool to have a 64-bit Z buffer!

V-man
01-28-2003, 06:34 AM
If everything were done in 64-bit float it would make sense, but the graphics card does most of its operations in 32-bit float, and the rest in integers or bytes.

WhatEver
01-28-2003, 01:26 PM
Whoa, a 64-bit Z buffer would ROCK!