what's wrong with this FBO generation

Hi,
I’ve been trying to render the depth buffer to a texture (starting a shadow map implementation), but I keep getting the FRAMEBUFFER_UNSUPPORTED status when creating the FBO. I’m new to this; I’ve tried following http://www.songho.ca/opengl/gl_fbo.html, which has a depth-buffer-only solution. I’ve also looked at the wiki here and some other resources, but I guess I’m just too confused by it all.

I need my code to be OGL 3.0 compatible, using non-fixed-pipeline functionality only. I’m running an NVIDIA 9300M GS card on WinXP. I have a 3.0 context (3.2 should be supported; I had some trouble getting it and settled for 3.0 in the meantime). I use GLEW for extensions.

Any help is really appreciated! Or a working snippet (one guaranteed to work on OGL 3.0 and newer).


// I requested a 32 bit z-buffer and seemed to have gotten it without trouble
// tried different buffer depths with different texture formats, too, but there
// might be a combination that I've missed

glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);

glTexImage2D(
	GL_TEXTURE_2D,
	0,	// mip level
	GL_DEPTH_COMPONENT32F, // internal format, have tried GL_DEPTH_COMPONENT (no error), too
	shadowWidth,
	shadowHeight,
	0, //border
	GL_DEPTH_COMPONENT, // format
	GL_FLOAT, // type
	NULL);	// data

// no error, can create this just fine
Core::CheckError("texture creation");

///////////////////////////////
// second, generate framebuffer
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

Core::CheckError("framebuffer object");

glGenRenderbuffers(1, &m_renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, m_renderBuffer);

Core::CheckError("renderbuffer");

// attachment/storage/whatever
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, shadowWidth, shadowHeight);
glBindRenderbuffer(GL_RENDERBUFFER, 0); // don't really know why I do that, but not having it doesn't help

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_renderBuffer);

Core::CheckError("attachment");

// attach texture to the framebuffer?
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

// we don't want to render the color buffer
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

// returns 0x00008cdd, which the spec says is FRAMEBUFFER_UNSUPPORTED_EXT
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

Hmmm. Why are you creating and attaching a renderbuffer for DEPTH, then turning around and trying to replace it with a texture? Get rid of the renderbuffer stuff.

My guess is you need this on your texture:

glTexParameteri( gl_target, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( gl_target, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

In the olden days I remember that once being required. Maybe it still is (note that the default MIN_FILTER uses MIPmapping and presumes MIPmaps are present). From this:

I gather it still might be necessary. Apparently on NVidia, an incomplete texture (e.g. MIPmapping filtering enabled with no MIPmaps) causes an unsupported FBO.

Also, try this with just a plain-old GL_DEPTH_COMPONENT24. Get that working before you flip to GL_DEPTH_COMPONENT32F.

Also, try disabling color writes, stencil writes, alpha test, stencil test.

glTexParameteri( gl_target, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( gl_target, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

thanks Dark Photon, that was the trick; also, thanks for pointing out the thing with the renderbuffer! The working version, in case anybody is interested, is:


glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	
glTexImage2D(
	GL_TEXTURE_2D,
	0,	// mip level
	GL_DEPTH_COMPONENT24, // internal format (GL_DEPTH_COMPONENT32F gave trouble, see below)
	shadowWidth,
	shadowHeight,
	0, //border
	GL_DEPTH_COMPONENT, // format
	GL_FLOAT, // type
	NULL);	// data

glBindTexture(GL_TEXTURE_2D, 0);

glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

I have another worry, though… the depth texture I get back seems to have some issues. It only contains values between ~0.95 and 1.0; when I remap that range to 0.0 - 1.0, one can make out the scene (it looks like a correct “depth-shaded” rendition of the scene I’m using). I’ve tried playing around with the near and far planes, but even having near = 10 and far = 300 doesn’t help. Is that normal? The calculator at http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html gives quite different values, although I suspect that this might be driver/implementation/format dependent. I currently use a 24-bit z-buffer and GL_DEPTH_COMPONENT24 (GL_DEPTH_COMPONENT32F gave NaN for all texels; reading GL_DEPTH_COMPONENT24 with GL_UNSIGNED_INT instead of GL_FLOAT gives a range from 1064844310 to 1065353216, a spread of only 508906; for the same setting with a 32-bit z-buffer and GL_DEPTH_COMPONENT32, the situation is quite similar).

Also (OT, sorry): since I’m taking a shadow map from the light position and projecting it onto the scene, to make sure the values in both z-buffers are in the same range, I should set the near/far distances to the same values for both the shadow map generation frustum and the scene rendering frustum, right? (Just a quick thought, obviously not that far yet.)

Perfectly normal. With the normal perspective transformation and a standard fixed-point Z buffer (DEPTH_COMPONENT24/32 FBO or system FB), most of your depth precision is clustered near the near clip plane (0.0), falling off as 1/z as you move away toward the far plane. For more detail, see this.

The same will be true for a floating-point depth buffer (DEPTH_COMPONENT32F), but this isn’t always ideal, particularly for shadow mapping. Clustering your precision around the near clip isn’t always what you want. So there are various things folks do to change the distribution of depths in the depth buffer. Here are a few:

The latter actually describes what you’re seeing.

Also (OT - sry), since I’m taking a shadow map from the light position and projecting it onto the scene, to make sure the values in both the z-buffers are in the same range, I should have near/far distances set to the same values for both the shadow map generation frustum as well as for the scene rendering frustum, right (just a quick thought, obvisouly not that far yet)?

No, not at all. You have different projection matrices for your camera view and light view (shadow map view). These can have totally different near/far clip plane distances – even be totally different projection types (which is what you’d do for a perspective camera and shadows from a directional light source [orthographic projection]).

First of all, thanks for all the effort.

Before your reply, I managed to implement a shadow mapping algorithm that is non-functioning but, I believe, theoretically sound (I took somebody else’s cookbook, read it, understood it, and implemented it), working on the assumption that the values I get from the depth buffer are correct. Now I’m trying to see what might be wrong with it.

As for the depth buffer stuff, I haven’t quite been able to read up on all of it yet (although I plan to read what I can tomorrow and eventually follow up on that), but here are some points I’d like to make in case you can reply before I start working on it:

from the post you linked to

By the way, one kind of interesting thing to try is to plug in z_eye = -(n+f)/2 (the plane half-way between the near and far clip planes) into the above equation. With a little algebra, you can see that you get z_ndc = (f-n)/(f+n) out of it.

For the sake of argument, I set near = 10, far = 200 for the light projection matrix, which puts the mid-plane of my depth buffer at z_ndc = (200 - 10) / (200 + 10) = 190 / 210 = ~0.9047. Now, my scene is a 30x30 grid of boxes centered around the origin, each box being about 40 units (including gaps). I can clearly see (in my depth buffer visualisation) tens of boxes being cut off by the far plane, with some boxes cut in half at the far plane, i.e. there is geometry at the far plane. Despite that, the values I read from the depth buffer are between 0.9845 (far) and 1.0 (near) - I walk through them to find the min/max - so the expected mid-plane value lies outside that range. Also, the “how to visualize the depth buffer” page shows a “non-linearized” depth buffer that is visible, even though a little washed out, while for me trying to visualize it gives a white screen (intensity * 255 and save; things become visible when I remap the depth_min - depth_max range to 0-255 and save). I can attach pictures/code/whatever if needed. I am using a perspective projection.

No, not at all. You have different projection matrices for your camera view and light view (shadow map view). These can have totally different near/far clip plane distances – even be totally different projection types (which is what you’d do for a perspective camera and shadows from a directional light source [orthographic projection]).

Ok, am I then right to say that this works because I transform a vertex from camera space to the light’s camera space and then project it using the light’s projection matrix (for example an orthographic projection for a directional light), so I get values in the same space? I.e., it doesn’t really matter whether the distribution above is bad or not, since I’d get the same eventual problem for both projections…?

Any hints on debugging the shadow maps? Obviously, trying to plot depth values didn’t work (I can’t really see ANYTHING; almost everything is white, bar a small strip that I’d say is already outside the light’s frustum), and trying to mark fragments whose projected texture coordinates are between 0 and 1 in both directions seems to give weird results (maybe that’s the problem).

thanks for the help, really; sorry for the length.

No: assuming use of a standard perspective projection matrix and the default glDepthRange of 0…1, near maps to screen z = 0.0 and far to screen z = 1.0. You’ve got that flipped in your description. So the fact that you are seeing 1) 1.0 values and 2) geometry visible in the light frustum chopped by the far plane is consistent.

To get some 0.0 depth values in your light frustum, move your light source down very near the plane of this planar grid you’re rendering, with the light cone direction tangent to the grid plane. Now, assuming you render the grid into the shadow map, you’re guaranteed to have geometry at the near plane, so you should see 0.0 values in your depth buffer.

[quote]You have different projection matrices for your camera view and light view (shadow map view). These can have totally different near/far clip plane distances

Ok, am I then right to say that this works because since I transform the/a vertex from camera space to light’s camera space and then project it using the light’s projection matrix[/quote]
Correct. The only thing I’d clarify about your question is that you don’t necessarily need to go through camera space (let’s call this the camera’s eye-space) when projecting to the light’s clip space. You could do:

obj space -> world space -> light’s eye space -> light’s clip space

The important point being that when you’re going to do a shadow map lookup (light space), you use the light’s projection matrix. And when you’re going to render the scene from the camera’s perspective, you use the camera’s projection matrix.

any hints on debugging the shadow maps?

Try the above, to make sure you get 0.0 values in that depth buffer. Note that those should correspond to light eye-space Z values of -10, per your specs.

Obviously, trying to plot depth values didn’t work (can’t really see ANYTHING), almost everything (bar a small strip that I’d say is already outside of the light’s frustum),

Yeah, that’s weird. Seems something is wrong in the math. I’d draw a plane into your shadow map in the light’s frustum such that it is cut off by the near plane AND cut off by the far plane (maybe sloping away and up in the view), so you can look at the depth returns and apply some common sense to what’s wrong. The depth values “should” run from 0…1 as you go from near->far.

You’re using a perspective light projection matrix, which implies you’re casting shadows from a point light source. You might verify that you are doing a projective texture lookup into your shadow map. E.g. shadow2DProj (pre-GLSL 1.3) or textureProj (GLSL 1.3+). You could do the perspective divide yourself, but might as well use GPU built-ins when possible.

thanks for the additional hints :)!

You might verify that you are doing a projective texture lookup into your shadow map

Erm, I’m still not really strong on the perspective divide thing (what purpose it serves aside from converting from homogeneous coords, i.e. making w = 1, if I’m not completely off; due to time constraints, I just accepted that it should be that way for now). So my question is: when using a directional light source, and thereby an orthographic projection for the light, does my shadow map lookup code change somehow? (I’d assume so since you mentioned it; personally I’d say it shouldn’t - I still get the coord in the light’s camera space, and what the light’s projection does then is its business.) A term to google/a link would be perfect!

Also, how’d I go about detecting being beyond the edge of the shadow map texture? I’ve read here that checking for ShadowCoord.w > 0.0 should do the trick, but it doesn’t seem to work for me; I can’t see why pure clamping should work (shadow on the edge, shadow beyond the edge?), and I’m kind of worried about having an if condition (checking both shadow map coordinates to see if they’re between 0 and 1) for that. I could probably also set up a border for the texture (“no shadow”, or “far plane”) and clamp.

Other than that, my shadow mapping seems to work. If anyone would like a demo app (to have something working to compare against, for example), let me know and I might get to it in a few days. (I believe I’ve seen some around here, and the link I gave has an implementation, although it’s not “3.2 core” (which I hope mine is) and doesn’t use glPolygonOffset, which I’m told is a better way to get rid of artifacts than what the page suggests.)

That’s it. You got it. Nothing complicated. Projecting 4D coords into 3D.

One of the reasons the perspective divide is separate from the perspective projection transform is that you want to clip after projection but before the perspective divide, because the divide can cause a nasty math blow-up (divide by zero).

Also, you need to defer the divide-by-w until the fragment stage to get perspective-correct interpolation. The projection transform can be applied in the vertex shader and its result interpolated.

my question is - when using a directional light source, and thereby an ortographic projection for the light, does my shadow map lookup code change somehow (I’d assume so since you mentioned it, I personally would say it shouldn’t

It doesn’t have to change, but it can, for efficiency. Why? Well, with a positional light source (perspective projection), we have to do that “divide by w” on the projected shadow map texture coordinate before we can do the lookup (thus texture2DProj vs. texture2D, or doing the divide-by-w manually in the shader). But with a directional light source (orthographic projection), the divide by w is pointless – an orthographic projection doesn’t perturb w (to check that, see the last row of the orthographic projection at the bottom of this page and note that it is 0,0,0,1). So while you “can” do the divide-by-w for a directional light source, it’s a waste of cycles.

Let’s see. The guy’s doing shadows from a positional light source, which implies a perspective shadow projection. For such projections, clip_space.w = -eye_space.z. This means it’s going to be positive “in front of” the eye (the light in this context) and negative “behind” the eye (the light). So I think what he’s doing here is preventing objects “behind” the light source from receiving shadows cast by objects “in front of” the light source.

should do the trick, doesn’t seem to work for me

I think you said you’re using a directional light source, which doesn’t use a perspective projection, so clip_space.w is a constant 1.0 and this trick doesn’t work. What’s he really doing with this trick? Looking at the sign of the light eye-space Z to determine whether to apply the shadow map. You could just do that. But can you really get behind a directional light source? That’s kinda strange.

can’t see why pure clamping should work (shadow on edge, shadow behind the edge?)

Well, if you’re behind the far-Z of the light-space frustum, CLAMP_TO_EDGE is probably just fine. I mean, if the point on the far plane is in shadow, what is the point 5 cm “beyond” the far plane? Yep, in shadow.

And if you can guarantee that no points on the near plane are in shadow, CLAMP_TO_EDGE is probably what you want there too. If a point on the near plane isn’t in shadow, then a point 5 cm closer to the light will also read not in shadow, which is what you want.

Where it gets a little ugly is if you allow points on the shadow map near plane to be in-shadow. Then clamping points closer to the light to the near plane isn’t necessarily a good thing.

In that case, you can just look at your clip coordinates and discard the shadow map lookup when you are closer to the light than the near plane.

I am kind of worried about having an if condition (check both shadow map coordinates to see if they’re between 0 and 1) for that.

If your if conditions are coherent (that is, they generally evaluate to the same answer for most small clusters of pixels – say, a 2x2 block of pixels), then they’re generally pretty cheap. And they can often be a good deal when you’re using them to avoid memory accesses, as in this case. So bench and see!

Could probably also setup a border for the texture (“no shadow”, or “far plane”) and clamp.
Perhaps, or CLAMP_TO_EDGE, which is what you want for the beyond-the-light-far-plane clamping anyway. See above.

[quote]Other than that, my shadow mapping seems to work, if anyone would like a demo app (to have something working to compare against, for example), let me know and I might get to that in a few days[/quote]

Sure thing! It’d be great to have more working GLSL shadow map examples to point folks to.