FBOs: depth works but not color

Hi all, besides wishing I were at SIGGRAPH right now, I’m working on a project where I render the scene to a framebuffer object. Right now the depth attachment shows up fine, but the color attachment doesn’t.

Is there anything in particular that might cause this to happen? The color texture is a regular old GL_TEXTURE_2D with internal format GL_RGB32F_ARB. The depth texture is likewise GL_TEXTURE_2D, with internal format GL_DEPTH_COMPONENT.

I’m attaching both of them to the framebuffer with:
glFramebufferTexture(GL_FRAMEBUFFER, attachmentType, texObj->getTexID(), 0);

Where attachmentType is GL_COLOR_ATTACHMENT0 for the color texture and GL_DEPTH_ATTACHMENT for the depth texture.
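Spelled out, the whole attach-and-check sequence is roughly this (a sketch; fboID, colorTexObj, and depthTexObj are just placeholder names for my framebuffer handle and texture wrapper objects):

GLuint fboID = 0;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);

// Attach level 0 of the color and depth textures.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexObj->getTexID(), 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexObj->getTexID(), 0);

// The FBO should report complete before anything is rendered into it.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete: 0x%x\n", status);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

The status check passes, which is part of what makes this so confusing.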

When I put those textures on a quad to view them, the color texture is always just black and the depth texture always shows up perfectly. I get no OpenGL errors. I’ve tried adding and removing the “EXT” from the end of the functions and constants, but that doesn’t help. I’ve also tried other internal formats such as GL_RGBA8, GL_RGB32F, and GL_RGB16F_ARB.

I’m running Windows 7, OpenGL 3.3, GLSL 3.3, and an NVIDIA GeForce 8600M GT video card. This exact same code worked just fine on a newer ATI card, but not on this slightly older (only by a couple of years) NVIDIA card.

Have you tried debugging the texture? What values are you writing to it?

Have you used glDrawBuffer to select the colour attachment when you bind the framebuffer?
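Something along these lines, right after binding (just a sketch; fboID stands in for whatever your framebuffer handle is):

glBindFramebuffer(GL_FRAMEBUFFER, fboID);
// Direct fragment color output to the color attachment. A new FBO's draw
// buffer defaults to GL_COLOR_ATTACHMENT0, but setting it explicitly rules
// this out as the cause.
glDrawBuffer(GL_COLOR_ATTACHMENT0);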

I’m doing multiple things actually. One framebuffer has a shader to write vertex position values to the texture. In another pass of the scene, another framebuffer uses a shader that writes out normal values.

I have tried using no shader, or just a shader that passes through values from OpenGL and does nothing else (i.e. a fragment shader with the single line gl_FragColor = gl_Color;).

For reference, I’m creating some of the textures like this:


glGenTextures(1, &texID);
glBindTexture(target, texID);
glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, 1024, 1024, 0, GL_RGB, GL_FLOAT, 0);
glTexParameteri(target, GL_GENERATE_MIPMAP, GL_TRUE);
glBindTexture(target, 0);

** I just figured this problem out! I noticed while copying and pasting the code above that the GL_GENERATE_MIPMAP line came AFTER the glTexImage2D line. I moved it to before glTexImage2D and now everything works. That ordering caused no problems on the ATI card or drivers, but the NVIDIA card or drivers don’t like it. Wow.
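In other words, the two lines now come in this order:

// Enable automatic mipmap generation BEFORE allocating the image.
glTexParameteri(target, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, 1024, 1024, 0, GL_RGB, GL_FLOAT, 0);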

You shouldn’t be using GL_GENERATE_MIPMAP with render targets anyway. There’s a reason glGenerateMipmap was introduced in the EXT_framebuffer_object specification.

Alfonse, I should use glGenerateMipmap instead? I’ve tried using glGenerateMipmap, and it really slows everything to a crawl because it has to be run on the texture every frame. Or am I doing it wrong? What’s the reason to use glGenerateMipmap?

“because it has to be run on the texture every frame.”

What do you think GL_GENERATE_MIPMAP is doing? The difference is that glGenerateMipmap will only do it specifically when you ask it to, while GL_GENERATE_MIPMAP will do it at an arbitrary time decided on by the driver. It may even generate mipmaps after every draw call to a texture with that set on it.
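If you genuinely need mipmaps of the render target, the explicit version looks roughly like this (a sketch; fboID and texID are placeholder handles):

// Render the pass into the FBO as usual.
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
// ... draw the scene ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Regenerate the mip chain once, only after the texture contents have
// actually changed and only if you sample it with a mipmapped filter.
glBindTexture(GL_TEXTURE_2D, texID);
glGenerateMipmap(GL_TEXTURE_2D);

And if you never sample the render target with a mipmapped minification filter, skip mipmaps entirely and use GL_LINEAR for GL_TEXTURE_MIN_FILTER.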

Thank you, Alfonse; of course it seems obvious to me now. I think you can chalk it up to the tunnel vision of troubleshooting the same few blocks of code for hours on end and missing the small, obvious stuff. It should be called “Coder Tunnel Syndrome”.