Drawing depth texture bound to a FBO

Hey All,

Sorry if this subject has been covered a lot. I’ve been searching for a few days now and have looked at many examples (even ones with source code), and I still haven’t been able to figure out why I can’t get this texture to display anything.

So, first some backstory. I’m trying to write a simple shadow mapping demo using no fixed-function pipeline calls, with the “OpenGL SuperBible 5” as my guide. So far I’ve understood everything, and I understand shadow mapping conceptually; I’m just having a heck of a time implementing it.

My scene is simple: there is a torus and a “floor,” and the light will be rotating about the torus, but that’s not really my problem. The problem I’m running into is that after my first render pass (where I bind the FBO and render the depth of the scene to a texture), I take the texture that is supposed to contain the depth information of the scene and render it to a quad. Unfortunately, the screen is just black.

Again, I’m just trying to render the texture that is supposed to be the “depth texture” of the scene and nothing appears; the screen is all black. If I use a completely different texture (like one read in from a jpeg/tga/what have you) the quad shows that texture.

So from that I’ve deduced that either depth information is not being written to my “depth texture,” or my shader somehow isn’t reading from the texture properly.

Below is some of the code, with commentary after each snippet.

First is a function which initializes the FBO and texture. Some of it I borrowed from examples/tutorials, but I believe I understand what each call is doing.


void generateShadowFBO()
{
	GLenum FBOstatus;

	// Create a shadow texture which is populated with the depth component
	glGenTextures(1, &depthTextureId);
	glBindTexture(GL_TEXTURE_2D, depthTextureId);

	// GL_LINEAR does not make sense for a depth texture. However, the next tutorial shows usage of GL_LINEAR and PCF
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	
	// Remove artefacts on the edges of the shadowmap
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

	// TODO*: Have the depth precision be settable
	glTexImage2D(
		GL_TEXTURE_2D,
		0,
		GL_DEPTH_COMPONENT,
		depthTextureWidth,
		depthTextureHeight,
		0,
		GL_DEPTH_COMPONENT,
		GL_UNSIGNED_INT,   // type for the (NULL) initial data
		NULL);
	glBindTexture(GL_TEXTURE_2D, 0);


	// Create a FBO
	glGenFramebuffers(1, &shadowFboId);

	// Bind the framebuffer to the current context and tell OpenGL that
	// the FBO won't be writing any color information, and bind the texture ID
	// to the texture component of the FBO
	glBindFramebuffer(GL_FRAMEBUFFER, shadowFboId);

	// Instruct OpenGL that we won't bind a color texture to the currently bound FBO
	glDrawBuffer(GL_NONE);
	glReadBuffer(GL_NONE);

	// Set the texture to be at the depth attachment point of the FBO
	glFramebufferTexture2D(
		GL_FRAMEBUFFER,
		GL_DEPTH_ATTACHMENT,
		GL_TEXTURE_2D,
		depthTextureId,
		0);
	
	// check FBO status
	FBOstatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
	if( FBOstatus != GL_FRAMEBUFFER_COMPLETE )
		printf("GL_FRAMEBUFFER_COMPLETE failed, CANNOT use FBO
");
	
	// switch back to window-system-provided framebuffer
	// and unbind the texture
	glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

Seems relatively straightforward, but maybe I’m missing something?
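As an aside, the generic GL_DEPTH_COMPONENT internal format in the glTexImage2D call above leaves the actual precision up to the driver. A minimal sketch of a variant that addresses the TODO, assuming a GL 1.4+ context (where sized depth formats are available):

	// Hypothetical variant: request an explicit 24-bit depth format
	// instead of letting the driver pick the precision.
	glTexImage2D(
		GL_TEXTURE_2D,
		0,
		GL_DEPTH_COMPONENT24,
		depthTextureWidth,
		depthTextureHeight,
		0,
		GL_DEPTH_COMPONENT,
		GL_UNSIGNED_INT,
		NULL);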

Now for some rendering code. First are the calls that set up rendering to the FBO (and, I’m assuming, to the depth texture). The setup of the camera, etc. is correct: if I do not render to the FBO (take out the glBindFramebuffer calls and clear the depth and color bits), the scene “renders” fine (it shows a flatly colored torus and floor).

Keep in mind I use a lot of GLTools-related abstractions to speed things up, but I don’t believe those are the problem; as I mentioned before, I am able to render other textures just fine, just not the one I’m supposedly writing depth information to.



	// Clear depth buffer
	glClear(GL_DEPTH_BUFFER_BIT);

	glBindFramebuffer(GL_FRAMEBUFFER, shadowFboId);
	// Write depth buffer
	modelViewMatrix.PushMatrix();
		// Set the "camera" to be at the light's position
		lightFrame.GetCameraMatrix(mCamera);
		modelViewMatrix.MultMatrix(mCamera);

		updateViewFrustumAndProjectionMatrix(depthTextureWidth, depthTextureHeight);

		static GLfloat color[] = { 0.0f, 0.0f, 1.0f, 1.0f };
		shaderManager.UseStockShader(
			GLT_SHADER_FLAT,
			transformPipeline.GetModelViewProjectionMatrix(),
			color);
		drawFloorAndTorusGeometry();

	modelViewMatrix.PopMatrix();

	// Reset frame buffer to read/write to the default window
	glBindFramebuffer(GL_FRAMEBUFFER, 0);

Again, seems straightforward. I bind the framebuffer for reading and writing (I know I only need it for writing…maybe I should bind it to GL_DRAW_FRAMEBUFFER instead?). I then render my scene using the “flat shader.”
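For what it’s worth, binding only the draw target is a one-line change; a sketch, assuming GL 3.0+ or ARB_framebuffer_object (which introduced the separate bind points):

	// Draw commands now target the FBO; reads still come from the default framebuffer
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER, shadowFboId);
	// ... render the depth pass ...
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);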

Yes, I know the flat shader might be a little overkill since it involves colors, and yes, I plan on writing my own super-simple shader in the future, but I don’t think that is the problem. As mentioned before, if I don’t bind the framebuffer, the scene renders “correctly.”

So the next bit of code makes a quad and binds the texture I want to use; the shaders do the fetching of texture coordinates, etc. Again, I don’t think the shaders are the issue, as other textures work fine, just not the texture I want to use (which is supposedly being written to above).



// Clear depth and color for "from camera" render pass
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

modelViewMatrix.PushMatrix();
	viewFrustum.SetOrthographic(
		0,
		windowWidth,
		0,
		windowHeight,
		1.0f,
		20.0f);
	projectionMatrix.LoadMatrix(viewFrustum.GetProjectionMatrix());
	modelViewMatrix.LoadIdentity();

	// Bind the depth texture
	glBindTexture(GL_TEXTURE_2D, depthTextureId);

	// Move back 1 unit so we can see the quad
	modelViewMatrix.Translate(0, 0, -1);

	shaderManager.UseStockShader(
		GLT_SHADER_TEXTURE_REPLACE,
		transformPipeline.GetModelViewProjectionMatrix(),
		0);
	quadBatch.Draw();

	glBindTexture(GL_TEXTURE_2D, 0);

modelViewMatrix.PopMatrix();

I don’t think there is any issue with the “GLBatch” that I use to draw the quad because, as mentioned before, using a different texture works just fine with the same code.

So now I’m stuck: my screen is black and I don’t even know where to begin debugging. How can I check the pixels in the texture? I’ve tried using glReadPixels with the format GL_DEPTH_COMPONENT after binding the texture, but that data doesn’t appear to be all 0s.

Any help is greatly appreciated, even if it’s just telling me how to debug any of this, as I’m really stuck. How can I see whether the texture has the correct data, short of rendering it on some geometry?
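One way to inspect a texture without rendering it is glGetTexImage, which copies the texels back into client memory. A minimal sketch, assuming desktop GL (glGetTexImage does not exist in GL ES) and the depthTextureWidth/Height variables from the setup code above:

	// Read the depth texture back into client memory for inspection
	GLfloat* texels = new GLfloat[depthTextureWidth * depthTextureHeight];

	glBindTexture(GL_TEXTURE_2D, depthTextureId);
	glGetTexImage(
		GL_TEXTURE_2D,
		0,                   // mip level
		GL_DEPTH_COMPONENT,  // request the depth values
		GL_FLOAT,            // as floats in [0, 1]
		texels);
	glBindTexture(GL_TEXTURE_2D, 0);

	// All 0.0 suggests the texture was never written to;
	// all 1.0 suggests it was cleared but nothing was drawn into it.
	delete [] texels;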

-EncodedNybble

I’m just trying to render the texture that is supposed to be the “depth texture” of the scene and nothing appears; the screen is all black.

I don’t think you are doing anything wrong. The depth values written to the depth buffer are non-linear and are probably being written to the texture OK; they just don’t show up very well when displayed on a quad. I know, I have done the exact same thing.
Try increasing the scene depth of the rendered objects so that the depth buffer holds a much larger range of depth values. Then you’ll need to write a shader that renders the depth texture by sampling the texels and converting them to linear depth values, so that the range can be seen more clearly on the screen.
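To make the conversion concrete, here is a minimal sketch, assuming a standard perspective projection with near plane zNear and far plane zFar; d is the value sampled from the depth texture, in [0, 1]. The same math can be moved into the fragment shader that displays the quad:

	// Convert a non-linear depth-buffer value into a linear [0, 1] value
	float linearizeDepth(float d, float zNear, float zFar)
	{
		float zNdc = 2.0f * d - 1.0f;  // window depth back to NDC [-1, 1]
		float zEye = (2.0f * zNear * zFar)
			/ (zFar + zNear - zNdc * (zFar - zNear));  // eye-space distance
		return (zEye - zNear) / (zFar - zNear);  // remap for display
	}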

Ok maybe I’m doing something wrong then. I’ve tried your feedback and still nothing. I’ll give a little more information.

The scales of my geometry are as follows. I have a torus which is roughly 0.5 units tall/wide. There is a floor, 20 units long and wide, about 0.6 units below the torus.

I have my light source pointing to the center of the torus and rotating about the center of the torus at a radius of 2 units.

My perspective view frustum function is as follows


void updateViewFrustumAndProjectionMatrix(int width, int height)
{
	viewFrustum.SetPerspective(
		45.0f,
		float(width) / float(height),
		1.0f,
		100.0f);
	projectionMatrix.LoadMatrix(viewFrustum.GetProjectionMatrix());
}

So, given your feedback, I’ve changed the perspective projection to have zNear = 1 and zFar = 10, so that the torus at 2 units and the 20-unit floor cover the full range of depth. Doing that still results in blackness.

Is there a way for me to see what depth values are written to the texture?

I guess it’s possible that the shaders I’m using aren’t correctly reading the texture information from the “depth texture” while still correctly reading the data from the other TGA-based textures I use.

The shaders are as follows, and are different from other shaders I’ve seen in some shadow mapping examples.

Vertex shader


uniform mat4 mvpMatrix;
attribute vec3 vVertex;
attribute vec2 vTexCoord0;
varying vec2 vTex;

void main(void)
{
    vTex = vTexCoord0;
    gl_Position = mvpMatrix * vec4(vVertex, 1.0);
}

Seems straightforward. It sets the output vTex to the vertex’s texture coordinate, and the position to the MVP matrix times the input vertex.

This code seems different from other shadow mapping examples I’ve seen, as the output here is a vec2 while those examples output a vec4, though I wasn’t quite able to follow them too well.

Fragment Shader:


varying vec2 vTex;
uniform sampler2D textureUnit0;

void main(void)
{
    gl_FragColor = texture2D(textureUnit0, vTex);
}

Again, seems straightforward: it sets the frag color to whatever is defined in the texture. Does the “depth” texture store RGBA values, or is the depth just a particular component of it?

Ok, well I’m going to experiment with different vertex shaders and/or glReadPixels and hope I can come up with something, but any feedback on how to debug a texture in general or a “depth texture” specifically would be greatly appreciated.

Alright, I took a break and am now revisiting. I still can’t figure this out; the “depth texture” STILL renders all black :frowning:

I’ve even made my own shaders (which was good practice), both of which are below.

Vertex:


uniform mat4 mvpMatrix;

in vec3 vVertex;
in vec2 vTexCoord0;

out vec2 vTex;

void main(void)
{
  // Simply pass the interpolated texture coordinate to pixel shader
  vTex = vTexCoord0;

  // Convert from object space to screen space
  gl_Position = mvpMatrix * vec4(vVertex, 1.0);
}

Fragment Shader:


in vec2 vTex;
uniform sampler2D depthSampler;

void main()
{
  gl_FragColor = texture2D(depthSampler, vTex);
}

I’ve also updated the render loop code to be



	// Clear depth buffer
	glClear(GL_DEPTH_BUFFER_BIT);

	// Enable the FBO for writing
	glBindFramebuffer(GL_FRAMEBUFFER, shadowFboId);

	// Write depth buffer
	modelViewMatrix.PushMatrix();
		// Do some code to draw the scene from the light's position
	modelViewMatrix.PopMatrix();


	// Reset frame buffer to read/write to the default window
	glBindFramebuffer(GL_FRAMEBUFFER, 0);

	// Clear depth and color for "from camera" render pass
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	modelViewMatrix.PushMatrix();
		viewFrustum.SetOrthographic(
			0,
			windowWidth,
			0,
			windowHeight,
			1.0f,
			20.0f);
		projectionMatrix.LoadMatrix(viewFrustum.GetProjectionMatrix());
		modelViewMatrix.LoadIdentity();

		// Bind the depth texture
		glBindTexture(GL_TEXTURE_2D, depthTextureId);
		glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );

		// Move back 1 unit so we can see the quad
		modelViewMatrix.Translate(0, 0, -1);

		// Use the shaders loaded
		glUseProgram(shaderID);
		
		// Load the uniforms for the shaders
		glUniformMatrix4fv(
			locMVP,
			1,
			GL_FALSE,
			transformPipeline.GetModelViewProjectionMatrix());
		glUniform1i(locTexUnit, 0);

		// Draw the quad to render texture to
		batch.Draw();

		// Reset texture parameter and unbind texture
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
		glBindTexture(GL_TEXTURE_2D, 0);

	modelViewMatrix.PopMatrix();

	// Do the buffer Swap
	glutSwapBuffers();

	// Tell GLUT to do it again
	glutPostRedisplay();
}


ANYONE?! I’m at a total loss here :frowning:

Again, using a random texture (like from a TGA) works fine so it’s not the quad geometry or texture coordinates.
Not using the FBO (just drawing to screen) also works fine.

Something is wrong with reading from or writing to the FBO and I can’t figure out what.

Couple thoughts.

Use glReadPixels() to read the raw depth values in the depth buffer, and look at that. Verify that they are in fact all 0s.

Also, it’s a little ambiguous exactly what your gl_FragColor = texture(…) is doing without more info. This is a depth texture. If you’re using GLSL 1.2 or earlier (it looks like you are), then the value actually returned by a depth texture lookup is determined by GL_DEPTH_TEXTURE_MODE as follows:

  • INTENSITY = rrrr
  • LUMINANCE = rrr1
  • ALPHA = 000r
  • RED = r001

For GLSL 1.3+, DEPTH_TEXTURE_MODE is deprecated and you always get the LUMINANCE result (rrr1).

So even if your depth buffer isn’t all zeros, a DEPTH_TEXTURE_MODE of ALPHA (with GLSL version <= 1.2) would possibly explain your results.

In any case, what’s usually done is to do a texture(…).r to grab the first component, which (so long as the GLSL version is >= 1.3 OR DEPTH_TEXTURE_MODE != ALPHA – the usual case) will give you the value from the texture. Use something like gl_FragColor.rgb = texture(…).rrr to stripe it across gl_FragColor.rgb, and explicitly set gl_FragColor.a = 1.
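If you are stuck on GLSL <= 1.2, the mode can also be pinned explicitly on the texture; a sketch (note GL_DEPTH_TEXTURE_MODE is deprecated and absent from core GL 3+ contexts):

	// Force depth lookups to return (r, r, r, 1) regardless of the driver default
	glBindTexture(GL_TEXTURE_2D, depthTextureId);
	glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);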

Thanks for the reply Dark Photon!

Ok, so for debugging purposes, I removed the binding to the FBO (and the unbinding) in my draw function so that the default depth buffer is written to.

I also stuck the following code in immediately after the calls to render my geometry (but before I use the depth texture and render the quad)


// Try to read depth buffer
	GLuint* pixels = new GLuint[windowWidth * windowHeight];

	glReadPixels(
		0,
		0,
		windowWidth,
		windowHeight,
		GL_DEPTH_COMPONENT,
		GL_UNSIGNED_INT,
		pixels);

	for ( int i = 0; i < windowHeight; ++i)
	{
		for ( int j = 0; j < windowWidth; ++j )
		{
			// Inspect this value in the debugger; the loop only exists for that
			unsigned int pixel = pixels[(windowWidth * i) + j];
		}
	}
	delete [] pixels;


Investigating the value of “pixel” in the loop, it is usually non-0 (which makes sense; the scene I’m rendering has mostly visible pixels). Using GLfloats and reading floats instead of unsigned ints produces expected results.

So this tells me that the depth buffer is being written to. Turning the framebuffer bind back on and leaving the glReadPixels in produces all 0s. Thus, this means (I think) that the depth writes are indeed going to the FBO, which is good!

Very interesting! Didn’t know that. I’ve changed the fragment shader to be


uniform sampler2D depthSampler;

in vec2 vTex;

out vec4 color;

void main()
{
  float d = texture2D(depthSampler, vTex).r;

  color = vec4(d, d, d, 1.0);
}

I’m not trying to stick to any particular GLSL version, just following some examples and trying to learn. Even when doing that, everything is still all black. If I change it to color = vec4(1, 0, 0, 1), everything is red as expected, and if I set it to vec4(0, 1, 0, 1), it’s green as expected, so the shader is OK.

Still sorta at a loss here as it seems that the depth information is not being written to the texture or I’m not reading it properly.

WOHOOOOOOOOOOOOOOOOOOOOO!

I fixed it. Not sure why (I’m sure someone can explain it to me) but I fixed my problem.

Moving the “bind FBO” call ABOVE the clearing of the GL_DEPTH_BUFFER_BIT did the trick.

So do glClear calls only affect bound frame buffers? (This makes sense.) If the depth buffer of a given FBO is never cleared (as was the case previously), why did I get 0s out of the depth texture instead of some value representing the largest z value over the lifetime of my application?

I’m glad I got it working but I would like to know why the code wasn’t working previously if anyone can enlighten me.

:slight_smile: Thanks!

Yep. But s/buffers/buffer/. And (for preciseness) s/framebuffer/draw framebuffer/.

AFAIK, you can have only one draw framebuffer bound at a time, and that’s the one glClear operates on.
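As for why the texture read back all 0s before the fix: a texture allocated with a NULL data pointer has undefined contents, which in practice often come back as zero, and with the default GL_LESS depth test no fragment can pass against an all-zero depth buffer, so the attachment likely just stayed at zero. Concretely, the fix amounts to reordering the first pass so the clear happens while the FBO is bound; a minimal sketch using the names from the snippets above:

	// Pass 1: bind the FBO FIRST, then clear, so the clear hits the
	// FBO's depth attachment rather than the window's depth buffer
	glBindFramebuffer(GL_FRAMEBUFFER, shadowFboId);
	glClear(GL_DEPTH_BUFFER_BIT);
	// ... draw the scene from the light's position ...
	glBindFramebuffer(GL_FRAMEBUFFER, 0);

	// Pass 2: this clear now only touches the default framebuffer
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	// ... draw the quad textured with depthTextureId ...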