depth buffer read

I’m reading back the depth buffer after rendering a simple quad to the screen, and I’m getting HUGE values, 3452816845 for instance. Here’s the exact code:

glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(0, 2, 2, 0, 0, 0, 0, 1, 0);
glBegin(GL_QUADS);

glVertex3f(-1, 0, -1);
glVertex3f(1, 0, -1);
glVertex3f(1, 0, 1);
glVertex3f(-1, 0, 1);

glEnd();

glFlush();

unsigned float *buff = new unsigned float[256*256];
glBindTexture(GL_TEXTURE_2D, depthtexture);
glTexImage2D(GL_TEXTURE_2D,0, GL_DEPTH_COMPONENT,256,256,0,GL_DEPTH_COMPONENT,GL_FLOAT,buff);

gl(0,0,256,256,GL_DEPTH_COMPONENT,GL_UNSIGNED_float,buff);

It doesn’t seem to return values between 0 and 1. I want to draw the depth buffer to the screen, but all I see is white since the values are so large. Any ideas? By the way, the window dimensions are 256x256.

I am curious, where exactly are you reading the values from the buffer?

I assume you have a typo and are using glReadPixels to read the depth buffer.

You need to read the pixels into buff before calling glTexImage2D.

Most importantly, and this caused me issues in a prior post, don’t forget that the depth buffer is typically 24 bits deep, but you are reading it back into a 32-bit float. I think the extra padded byte is set to FF; see my post from earlier today…

Good Luck

The weird number you posted is 0xCDCDCDCD, which looks a lot like the memory-fill value of a debug malloc to me.
That would mean you’re not reading anything at all. Check for GL errors.

glReadPixels(0,0,256,256,GL_DEPTH_COMPONENT,GL_FLOAT,buffer);

^^ works for me.

azcoder,
There will be no padding. The results are plain 32-bit floats, ranging from 0.0f to 1.0f; the data conversion is automatic.

Thanks a lot, guys. This code now displays the correct values:

float *buff = new float[256*256];
glBindTexture(GL_TEXTURE_2D, depthtexture);
glReadPixels(0,0,256,256,GL_DEPTH_COMPONENT,GL_FLOAT,buff);
glTexImage2D(GL_TEXTURE_2D,0, GL_DEPTH_COMPONENT,256,256,0,GL_DEPTH_COMPONENT,GL_FLOAT,buff);

But I’m still trying to texture this depth image onto a quad, and all I see is white, even though the values are now between 0 and 1 (I wrote them out to a file to check). I tried this to texture the quad:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, depthtexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB,GL_LUMINANCE);

glBegin(GL_QUADS);
	glTexCoord2f(0,0);
	glVertex3f(-1,1,0);
	glTexCoord2f(1,0);
	glVertex3f(1,1,0);
	glTexCoord2f(1,1);
	glVertex3f(1,-1,0);
	glTexCoord2f(0,1);
	glVertex3f(-1,-1,0);
glEnd();

I had read in a previous post that I had to call glTexParameteri twice with those parameters, but I still only see a white quad. Any ideas? I suppose I could just build an RGB texture from the data, but that would be extremely slow.

Try setting up some parameters for your texture (just after your glTexImage2D call). In particular, the default GL_TEXTURE_MIN_FILTER expects a full mipmap chain; without one the texture is incomplete and renders as plain white:

glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_NEAREST);

:) :) :) :)

:-)))))

THANK YOU!!! It works now. I had to change GL_DEPTH_COMPONENT to GL_LUMINANCE in the glTexImage2D call, but after those glTexParameteri calls it displays perfectly. Thanks a lot.