
View Full Version : depth buffer copy



stanleyw
01-22-2004, 03:47 PM
I am trying to shift my depth buffer down.
The following code works when my glReadPixels/glDrawPixels functions use type GL_FLOAT with a float buffer, but glReadPixels is too slow.

Using the type GL_UNSIGNED_SHORT, the glReadPixels call is fast, but something does not work. Does anybody know if this code should work with GL_UNSIGNED_SHORT?

glRasterPos3f( 0.0f, 0.0f, 1.0f );
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );

glReadPixels( fXOffset, fYOffset,
              m_PBuffer.nWidth - fXOffset, m_PBuffer.nHeight - fYOffset,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, g_pDepthMap );

glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );

TRACE( "!PBUF: depth read\n", m_nXOffset, m_nYOffset );

glReadPixels( fXOffset, fYOffset,
              m_PBuffer.nWidth, m_PBuffer.nHeight,
              GL_RGBA, GL_UNSIGNED_BYTE, g_pByteMap );

glClearDepth( 0.0f );
glClear( GL_DEPTH_BUFFER_BIT );
glClear( GL_COLOR_BUFFER_BIT );

glDrawPixels( m_PBuffer.nWidth, m_PBuffer.nHeight,
              GL_RGBA, GL_UNSIGNED_BYTE, g_pByteMap );

TRACE( "!PBUF: finished copy\n", m_nXOffset, m_nYOffset );
glRasterPos3f( 0.0f, 0.0f, 0.0f );

glEnable( GL_DEPTH_TEST );
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glDepthFunc( GL_ALWAYS );
glDrawPixels( m_PBuffer.nWidth - fXOffset, m_PBuffer.nHeight - fYOffset,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, g_pDepthMap );

glDepthFunc( GL_GEQUAL );

glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );

TRACE( "!PBUF: finished depth draw\n", m_nXOffset, m_nYOffset );

glEnable( GL_DEPTH_TEST );


I am using an NVIDIA GeForce FX 5700 Ultra.

The above code is drawing into a pbuffer.

Thanks,

Relic
01-23-2004, 02:54 AM
Think of all glReadPixels and glDrawPixels operations as working in the floating-point range 0.0 to 1.0 (except on float buffers).
That means all formats are converted to and from that range.

If you have a 16-bit depth buffer, your code does nothing to the depth values and should work.
If you have a 24-bit depth buffer, you have lost the 8 least significant bits in the read, and the glDrawPixels places the 16 bits you kept into the most significant bits.
Analogously for 32-bit depth buffers, where 16 bits are lost.

Use GL_UNSIGNED_INT to keep the full precision in case GL_DEPTH_BITS is greater than 16, and glDraw it back with GL_UNSIGNED_INT as well.
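A sketch of that variant, using the names from your code (g_pDepthMapUI is a hypothetical GLuint array sized for the full pbuffer; only the read/draw pair is shown, the clears and raster position stay as you have them):

```c
/* Sketch only: query the actual depth precision, then move the depth
 * region with GL_UNSIGNED_INT so no bits are lost on 24/32-bit buffers. */
GLint depthBits = 0;
glGetIntegerv( GL_DEPTH_BITS, &depthBits );   /* > 16 means GL_UNSIGNED_SHORT truncates */

glReadPixels( fXOffset, fYOffset,
              m_PBuffer.nWidth - fXOffset, m_PBuffer.nHeight - fYOffset,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, g_pDepthMapUI );

/* ... clear, set the raster position and glDepthFunc(GL_ALWAYS) as before ... */

glDrawPixels( m_PBuffer.nWidth - fXOffset, m_PBuffer.nHeight - fYOffset,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, g_pDepthMapUI );
```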

Check the glReadBuffer and glDrawBuffer settings.
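Checking those is a one-liner each (sketch; for a single-buffered pbuffer both are typically GL_FRONT):

```c
/* Sketch: verify where glReadPixels reads from and glDrawPixels writes to. */
GLint readBuf = 0, drawBuf = 0;
glGetIntegerv( GL_READ_BUFFER, &readBuf );
glGetIntegerv( GL_DRAW_BUFFER, &drawBuf );
/* If they are not what you expect, set them explicitly with
 * glReadBuffer()/glDrawBuffer() before the copy. */
```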

BTW, there is a glCopyPixels, too. ;)
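A sketch of the glCopyPixels route, again with the names from your code: it shifts the depth region in one call with no CPU round-trip and no format conversion. One caveat to test on your driver: copying onto an overlapping region of the same buffer is not guaranteed to give well-defined results.

```c
/* Sketch: copy the depth region to the current raster position in one
 * call.  The copied values go through the fragment pipeline, so the
 * depth test must be set to let every copied value through. */
glRasterPos3f( 0.0f, 0.0f, 0.0f );             /* destination corner */
glDepthFunc( GL_ALWAYS );
glCopyPixels( fXOffset, fYOffset,
              m_PBuffer.nWidth - fXOffset, m_PBuffer.nHeight - fYOffset,
              GL_DEPTH );
glDepthFunc( GL_GEQUAL );                      /* restore the original test */
```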

[This message has been edited by Relic (edited 01-23-2004).]