Supported FBO Texture Internal Formats

Is there an easy way to determine what internal formats you are allowed to use in FBOs color attachment textures?

I read over the spec, and it says that if you claim to support FBOs you have to support at least one format, but I’ve tried several of the obvious ones and have had no luck.

1, 2, 3, 4, GL_RGBA, GL_RGBA2, GL_RGBA4, GL_RGBA8, GL_RGBA12, GL_RGBA16, GL_RGB, and GL_RGB8 all return GL_FRAMEBUFFER_UNSUPPORTED_EXT.

The spec seemed to indicate that that value is only used to signal unsupported internal formats. I’m not too picky; I can use just about any format for what I’m working on, but I can’t seem to find one that works.

Is there anything I could have done earlier in the program that would preclude internal formats from working?

I’ve got an NVIDIA 8600, so it’s a modern card. I’m using GLEW to handle all my extensions, and it’s not crashing on function calls, so I assume I initialized it correctly.

Does anyone happen to know a way to check for working internal formats, or where to find out which internal formats are supported on which cards?

How are you sure that your problem is with the internal format? How are you using FBOs?

GL_RGBA8 is supported on your hardware, so your framebuffer setup code is most likely wrong.

The spec made it sound like GL_FRAMEBUFFER_UNSUPPORTED_EXT was only returned for a bad internal format, so that’s how I came to that conclusion.

An example of how I am creating my FBO is:

GLuint handle;
glGenFramebuffersEXT(1, &handle);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, handle);

// Depth attachment: a renderbuffer matching the color attachment's size.
GLuint depthBufferHandle;
glGenRenderbuffersEXT(1, &depthBufferHandle);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBufferHandle);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthBufferHandle);

// Color attachment: an empty 2D texture (the internal format is what I vary).
GLuint textureHandle;
glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureHandles[i], 0);

// Check completeness.
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch (status)
{
case GL_FRAMEBUFFER_COMPLETE_EXT:
    std::cout << "Framebuffer creation successful!" << std::endl;
    break;
case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
    std::cout << "Framebuffer unsupported!" << std::endl;
    break;
default:
    std::cout << "Framebuffer other!" << std::endl;
}

And I get GL_FRAMEBUFFER_UNSUPPORTED_EXT for all of the internal formats listed above.

As far as I can tell, I’m doing everything the way you’re supposed to.

One thing I did think of: I’m using a width and height of 16, which works fine for my other textures. Is it possible FBOs have a higher minimum size or something?

You’re using

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureHandles[i], 0);

instead of

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureHandle, 0);

but maybe you changed the code a bit to post it here and forgot about that one.

Does it work without attaching a depth buffer?

Yeah, I just forgot to change that one. I’ll try it without the depth buffer.

Same results without the depth buffer.

What about larger textures?

16x16 same results.
64x64 same results.
512x512 same results.
1024x1024 same results.

So, I don’t think it’s the size.
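As far as I know there is no query for a minimum size, but it’s easy to at least confirm the sizes are well under the maximums. A minimal sketch, assuming GLEW is initialized and EXT_framebuffer_object is available:

GLint maxTexSize = 0, maxRbSize = 0;
// Upper bounds on texture and renderbuffer dimensions; there is no minimum-size query.
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxRbSize);
std::cout << "Max texture size: " << maxTexSize
          << ", max renderbuffer size: " << maxRbSize << std::endl;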

Generate mipmaps


glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Fill in the mipmap chain so the texture is complete.
glGenerateMipmapEXT(GL_TEXTURE_2D);

Mipmaps worked. 14 of the 16 internal formats I tried now create successfully; GL_RGBA2 and GL_RGBA4 didn’t.

Thanks for your help, now I can get back to work.
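For reference, in case anyone else is hunting for working internal formats: the simplest approach I know of is to probe by trial, creating a throwaway texture and FBO per format and checking completeness. A minimal sketch, assuming a current GL context with GLEW initialized; the helper name and format list are just illustrative:

bool probeColorFormat(GLenum internalFormat)
{
    // Throwaway FBO with a single color attachment of the given format.
    GLuint fbo = 0, tex = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // No mipmaps are defined, so use a non-mipmap minification filter
    // to keep the texture complete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    bool complete = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
                    == GL_FRAMEBUFFER_COMPLETE_EXT;

    // Clean up so the probe leaves no state behind.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteTextures(1, &tex);
    glDeleteFramebuffersEXT(1, &fbo);
    return complete;
}

const GLenum candidates[] = { GL_RGB8, GL_RGBA8, GL_RGBA16, GL_RGBA4 };
for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); ++i)
    std::cout << "0x" << std::hex << candidates[i]
              << (probeColorFormat(candidates[i]) ? " works" : " unsupported")
              << std::endl;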

Why are mipmaps necessary?
Won’t that make the memory footprint of every FBO about twice as big?

Why are mipmaps necessary?

Because he didn’t set his texture parameters correctly. The default minification filter uses mipmapping, so if he doesn’t define the mipmap levels, the texture is incomplete. And some drivers don’t like incomplete textures attached to FBOs.
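So generating the mipmap chain is one fix; another, if mipmaps aren’t wanted, is to switch the minification filter to one that only uses the base level. A minimal sketch of that alternative (the filter values shown are the usual non-mipmap choices):

glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which needs a
// full mipmap chain; GL_LINEAR (or GL_NEAREST) makes the single level complete.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);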

Won’t that make the memory footprint of every FBO about twice as big?

No. FBOs themselves are lightweight state objects; they don’t take up any significant memory. And a texture’s mipmap chain only adds about 33% on top of the base level’s size.
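The 33% comes from a geometric series: each mip level has a quarter of the pixels of the level above it, so the extra levels add 1/4 + 1/16 + 1/64 + … ≈ 1/3 of the base level. For example, a 1024x1024 GL_RGBA8 base level is 4 MB, and the full mipmap chain brings it to roughly 5.33 MB.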