Using Depth as Texture

Hi,

I am trying to get the depth buffer into a texture so I can render it onto a quad, but I’m not having much luck getting it to work.

I am using glCopyTexSubImage2D to grab the current depth buffer, but when I render the quad with the texture, the texture does not show up. If I change the format of the texture to GL_RGB so it reads from the color buffer instead (code commented out below), everything works fine.

Another problem: when I attempt to grab the depth buffer, rendering becomes very slow, even though no GL errors are reported.

Does anyone know the correct way to do this?

glEnable(GL_TEXTURE_2D);

// no texture yet, so create one
if (data->mGLTexture == -1)
{	
	// some implementations of OpenGL can handle glTexImage2D with a NULL pointer, but some can't, so allocate
	// data for the texture.
	
	unsigned char * tmp_memory = (unsigned char *)malloc(data->mTextureWidth * data->mTextureHeight * 3);
	if (tmp_memory == NULL)
		exit(1);
	
	glGenTextures(1, &data->mGLTexture);
	glBindTexture(GL_TEXTURE_2D, data->mGLTexture);

	// if the data format is set to GL_RGB, then everything works. When it is set to GL_DEPTH_COMPONENT, the
	// texture does not render. From the documentation I have found, this should work... glCopyTexSubImage2D
	// should recognize the format of the texture and read from the buffer accordingly.

	//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, data->mTextureWidth, data->mTextureHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, (char*)tmp_memory);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, data->mTextureWidth, data->mTextureHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, (char*)tmp_memory);

	if (tmp_memory)
		free(tmp_memory);
}
else
	glBindTexture(GL_TEXTURE_2D, data->mGLTexture);

// copy the front buffer contents into the texture
glReadBuffer(GL_FRONT);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, data->mWidth, data->mHeight);

// default texture environment and linear texture filtering
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

// we want to use glTexCoord
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);

// setup the matrix stacks for 2d rendering
glMatrixMode(GL_TEXTURE);						
glLoadIdentity();									

glMatrixMode(GL_PROJECTION);						
glLoadIdentity();									

gluOrtho2D(0, data->mWidth, 0, data->mHeight);

glMatrixMode(GL_MODELVIEW);							
glLoadIdentity();									

// clear the screen
glDrawBuffer(GL_FRONT);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);


glColor4f(1.0f, 0.0f, 0.0f, 1.0f); //renders red when the texture does not work

// we did a sub-image copy, so the texture is bigger than the screen. Adjust the texture coordinates
// so the texture on the rectangle shows only the captured screen area.
// If we used 0.0 - 1.0 we would see garbage outside the captured region.
// The underlying problem is the lack of non-power-of-two texture support.
float TexCoordMaxX = data->mWidth / (float)data->mTextureWidth;
float TexCoordMaxY = data->mHeight / (float)data->mTextureHeight;
glBindTexture(GL_TEXTURE_2D, data->mGLTexture);

glBegin(GL_POLYGON);
	glTexCoord2f(TexCoordMaxX, 0.0f);
	glVertex2f((float)data->mHalfWidth, 0.0f);

	glTexCoord2f(TexCoordMaxX, TexCoordMaxY);
	glVertex2f((float)data->mHalfWidth, (float)data->mHeight);
	
	glTexCoord2f(0.0f, TexCoordMaxY);
	glVertex2f(0.0f, (float)data->mHeight);
	
	glTexCoord2f(0.0f, 0.0f);
	glVertex2f(0.0f, 0.0f);

glEnd();

Thanks in advance for any help!

Steven Walker
WalkerFX

Don’t know what you have going on elsewhere, but I think you should disable the depth compare mode:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);

I see you’re setting the Read/Draw buffers. I think they should be good to go by default. Perhaps you change them elsewhere?

You can pass NULL to glTexImage2D to initialize your texture if you intend to subsequently fill it with glCopyTexSubImage2D.
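
For example, the allocation could look something like this (just a sketch, reusing the sizes and internal format from your code):

	// no client data needed; the texture will be filled by glCopyTexSubImage2D later
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, data->mTextureWidth, data->mTextureHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);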

Perhaps I missed something else.

So I got this code rendering the depth buffer, but it is still tremendously slow (about 1 frame per second). I have a Quadro FX 3450 with the latest driver (84.26), and yet even at a frame size of 320x240 it is very slow.

Does anyone have any ideas what could be causing this?

FYI, FBOs may not be an option in my case because I’m writing this as a plug-in and do not have control over the original creation of the buffers.

Thanks for the help!

Walker

This setup works for me, and it should be fast:

		glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, x, y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
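
And per frame, roughly something like this to refresh and draw it (just a sketch; depthTex is whatever name you gave the texture above, and x, y are the same sizes passed to glTexImage2D):

		// refresh the depth texture from the current depth buffer
		// (depthTex, x and y are placeholder names matching the setup above)
		glBindTexture(GL_TEXTURE_2D, depthTex);
		glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, x, y);

		// draw a screen-sized quad; the depth values show up as luminance
		glEnable(GL_TEXTURE_2D);
		glBegin(GL_QUADS);
			glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
			glTexCoord2f(1.0f, 0.0f); glVertex2f((float)x, 0.0f);
			glTexCoord2f(1.0f, 1.0f); glVertex2f((float)x, (float)y);
			glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (float)y);
		glEnd();

Since the texture here is created at exactly x by y, texture coordinates 0.0 - 1.0 cover the whole copied region, so the NPOT padding workaround from your original code isn't needed.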