GL_MAX_RENDERBUFFER_SIZE

I am doing offscreen tiled rendering of very large output images using FBOs. For best performance, I am trying to create the largest FBO (and thus the largest tile size) that I can. To that end, I am using the following code:

GLint maxRenderBufferSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxRenderBufferSize);

And then I create an FBO of that size. My OpenGL context comes from an invisible window that I create at an arbitrary size (100x100). The value I get back for the maximum size is 8192, so I create 8192x8192 renderbuffers.

I get no errors when creating the renderbuffers and so on; I just get an “incomplete” status from glCheckFramebufferStatusEXT() once I’ve set up the FBO. Note that my code does work for smaller FBOs.
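For context, the setup looks roughly like this (a trimmed sketch; it assumes one RGBA8 color renderbuffer plus a 24-bit depth renderbuffer, with error handling left out):

GLuint fbo, colorRb, depthRb;
GLint maxSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxSize);  /* 8192 here */

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

/* Color renderbuffer at the maximum reported size. */
glGenRenderbuffersEXT(1, &colorRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, maxSize, maxSize);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, colorRb);

/* Depth renderbuffer of the same size. */
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, maxSize, maxSize);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

/* Complete at smaller sizes, “incomplete” at 8192x8192. */
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);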

Any ideas on how I should approach this?

Thanks, Sean

Two 4096x4096 buffers instead?
Switching between them is not a huge penalty, and I have generally found that on most hardware you lose a little performance once you get up to the maximum texture size anyway…
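Something along these lines (an untested sketch — OUT_W/OUT_H, renderScene(), and outputImage are placeholders for whatever your pipeline actually does): render the large image in 4096x4096 tiles by shifting an orthographic projection per tile, then read each tile back into the final image:

const int TILE = 4096;                   /* tile size known to work */
const int OUT_W = 16384, OUT_H = 16384;  /* hypothetical final image size */
GLubyte *outputImage = malloc((size_t)OUT_W * OUT_H * 4);

glPixelStorei(GL_PACK_ROW_LENGTH, OUT_W);  /* pack each tile into the big image */

for (int ty = 0; ty < OUT_H; ty += TILE) {
    for (int tx = 0; tx < OUT_W; tx += TILE) {
        /* Map this tile’s sub-rectangle of the full image onto the FBO
           (assumes the scene is drawn in image coordinates). */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(tx, tx + TILE, ty, ty + TILE, -1.0, 1.0);
        glViewport(0, 0, TILE, TILE);

        renderScene();  /* placeholder for the actual drawing code */

        /* Read the finished tile into its spot in the output image. */
        glReadPixels(0, 0, TILE, TILE, GL_RGBA, GL_UNSIGNED_BYTE,
                     outputImage + ((size_t)ty * OUT_W + tx) * 4);
    }
}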

8192 x 8192 x 4 bytes = 256 MB, times 2 (with a depth buffer) = 512 MB; maybe VRAM is a bit fragmented and there’s not enough space for the two chunks. I’ve seen that happen. Create the FBOs before any shaders and other textures.
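Spelled out (assuming 4 bytes per pixel for RGBA8 color, and the same footprint again for a depth buffer):

size_t color = (size_t)8192 * 8192 * 4;  /* 268435456 bytes = 256 MB */
size_t total = color * 2;                /* 536870912 bytes = 512 MB */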

Thanks for your answers; I must admit, I wasn’t even thinking about VRAM. It sounds like the number I get back from GL_MAX_RENDERBUFFER_SIZE is not dynamic (it doesn’t take the current VRAM situation into account). Is the only viable strategy to try one size and, if that fails, try a smaller one, and so on? I was indeed able to create a 4096x4096 FBO which worked correctly.
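In other words, something like this fallback probe (a sketch; tryCreateFbo() is a hypothetical wrapper around the FBO setup above that returns whether glCheckFramebufferStatusEXT() reports GL_FRAMEBUFFER_COMPLETE_EXT):

GLint size;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &size);

/* Halve the requested size until the framebuffer comes back complete. */
while (size >= 256 && !tryCreateFbo(size, size)) {
    size /= 2;
}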

Alright, I am a little stumped. I thought I had solved it by checking the available VRAM and never creating anything larger than 1/16th of it (just a reasonable guess). Now I am seeing, however, that while creation never fails, I get bands of scan lines in which proper rendering does not occur; usually they are blank, but sometimes they contain scattered pixels of the wrong color. I’ve confirmed this on another machine as well.

Is this just something about which I should contact NVidia? Does it ring any bells for anyone?

The number you get back from GL_MAX_RENDERBUFFER_SIZE is not dynamic. It’s a “best case” of what the “engine” (usually the hardware) supports. Even if you had fictional graphics hardware with 1 TB of VRAM supporting 64kx64k renderbuffers, if the VRAM were badly fragmented, chances are even 1024x1024 would fail (I don’t think GPUs have MMUs… yet :wink: ).

As for the banding problem: do the bands fall on the edges of your FBO-sized tiles, or are they spaced 8, 16, or 32 pixels apart in y?

Just in case anyone searching finds this thread: I ended up submitting the issue I described above (“bands of scan lines in which proper rendering does not occur”) to NVidia. It appears to be a bug in their drivers, and I am working with them to narrow it down.