FBO + MRT + large float textures = error?

I’m experiencing some weird behavior in a program I’m developing, and I’m hoping someone here can clue me in. I have a GPGPU program (in OpenGL with GLSL) that works on 9 floating-point values per pixel, stored across three RGBA32F textures, and it has to ping-pong between them, so there are 6 textures total. Each pass I attach 3 of them to an FBO and bind the other 3 as textures, activate my shader, and draw a quad, writing to the 3 attached textures with MRT in the shader. Rinse, repeat. This works fine with 1024x1024 textures and is quite speedy on a GeForce 8800 GTX (768 MB of RAM, driver 163.71 on Windows XP). However, when I bump up to 2048x2048, it fails. Specifically, the first pass works fine, and then when the second pass gets to attaching textures to the FBO, the third attachment causes a FRAMEBUFFER_UNSUPPORTED error.
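For reference, the per-pass attach step looks roughly like the sketch below (simplified; fbo, writeTex[] and readTex[] are placeholders for my real handles, and I’m assuming GLEW or an equivalent loader for the EXT/ARB entry points). The error shows up at the glCheckFramebufferStatusEXT call right after the third attachment:

#include <stdio.h>
#include <GL/glew.h>

/* One ping-pong pass: attach three render targets, select them with
   MRT, then bind the other three textures for reading. */
static const GLenum bufs[3] = {
    GL_COLOR_ATTACHMENT0_EXT,
    GL_COLOR_ATTACHMENT1_EXT,
    GL_COLOR_ATTACHMENT2_EXT
};

void bindPass(GLuint fbo, const GLuint writeTex[3], const GLuint readTex[3])
{
    int i;
    GLenum status;

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    for (i = 0; i < 3; ++i)
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT + i,
                                  GL_TEXTURE_2D, writeTex[i], 0);
    glDrawBuffersARB(3, bufs);

    /* At 2048x2048 this reports GL_FRAMEBUFFER_UNSUPPORTED_EXT (0x8CDD)
       on the second pass; at 1024x1024 it is always complete. */
    status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
        fprintf(stderr, "FBO incomplete: 0x%x\n", status);

    for (i = 0; i < 3; ++i) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, readTex[i]);
    }
}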

I’m suspicious it’s a shortage of memory somehow, but that doesn’t make much sense: 6 RGBA32F textures at 1024x1024 is roughly 100 MB, and at 2048x2048 it should be about 400 MB, which is only half of what I have available, so it really should be okay. Another thought I had was that maybe the problem was that the 2048x2048 texture was larger than my window (1024x1024), but when I run with a 1024x1024 texture in a 512x512 window, there’s no problem.
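(To spell out the arithmetic: one 2048x2048 RGBA32F texture is 2048 x 2048 pixels x 4 channels x 4 bytes = 64 MB, so the six textures come to 384 MB, which is where the ~400 MB figure comes from.)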

Any thoughts on what’s going on? Or how I might go about debugging this? Thanks!

-stephen diverdi
-stephen.diverdi@gmail.com

6 RGBA32F textures at 1024x1024 is roughly 100 MB, and at 2048x2048 it should be about 400 MB, which is only half of what I have available,

In order to render, all render targets and all textures must be in video memory simultaneously. So, you will need a graphics card that has 400+MB of video memory to pull off what you’re attempting. They do exist, but they’re pretty expensive.

Korval, read the post carefully. His card has 768 MB of memory.

IMHO, there may be a limit on the maximum amount of memory that render targets can occupy. I don’t see any other explanation.

Zengar, I agree, but I’m not sure how I can go about testing that hypothesis. I’m going to make some more noise over on the NVIDIA forums to see if I can get an official word, but in the meantime…

Thanks for the replies!

Hmm, looks like I may have answered my own question - installing the NVIDIA PerfKit installed an older (instrumented) driver, version 163.16, and now it will run with 2048x2048 textures.

Not an error, but I have a performance problem on my ATI X1950 Pro.

I am using two 4-channel fp32 textures.

glDrawBuffersARB(2, buffers); // 30 fps
versus
glDrawBuffersARB(1, buffers); // 250 fps
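For context, the two targets are set up roughly like this (simplified; tex[], fbo, W and H are placeholders for my real handles and sizes):

GLuint tex[2], fbo;
GLenum buffers[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
int i;

/* Two 4-channel fp32 textures attached to one FBO as color targets. */
glGenTextures(2, tex);
for (i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, W, H, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex[0], 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                          GL_TEXTURE_2D, tex[1], 0);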

I have a performance problem on my ATI X1950 Pro.

What, did you think rendering to two 128-bit floating-point textures would be cheap? You probably killed your memory bandwidth.

That would only be the reason if I were using large-resolution textures like 1024x1024, 2048x2048, or higher, no?
I get the same fps with 256x256 textures; I also tried fp16 and other formats at 256x256, and the results are exactly the same.

I get the same fps with 256x256 textures; I also tried fp16 and other formats at 256x256, and the results are exactly the same.

That seems… unlikely. Even with a driver bug, you should see a higher framerate for the single buffer case using fp16 or 32-bit integer render targets than with fp32.

Are you sure you’re detecting the framerate correctly?
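One quick sanity check is to time a batch of frames with glFinish() on both ends, so queued-up GPU work can’t flatter either case. A rough sketch (renderPass() and getTimeMs() are placeholders for your draw code and any wall-clock timer, e.g. QueryPerformanceCounter or gettimeofday):

double t0, t1;
int i, frames = 500;

glFinish();                 /* drain anything already queued          */
t0 = getTimeMs();
for (i = 0; i < frames; ++i)
    renderPass();           /* placeholder: one MRT pass              */
glFinish();                 /* wait until the GPU has really finished */
t1 = getTimeMs();
printf("%.2f ms/frame (%.1f fps)\n",
       (t1 - t0) / frames, 1000.0 * frames / (t1 - t0));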

Yes, I am quite sure about that.

Though I didn’t mean the single-buffer results; I compared only the multi-buffer results, at 256x256 with all the formats, 16/32-bit float and integer.
I don’t have any performance problems when I render to a single texture; there the results change when I switch formats, as they should.

“The shading language preprocessor #define GL_ARB_draw_buffers will
be defined to 1, if the GL_ARB_draw_buffers extension is supported.”

#extension GL_ARB_draw_buffers : enable
gives the warning “WARNING: 0:3: extension ‘GL_ARB_draw_buffers’ is not supported”, which is not actually the case.

What am I doing wrong?
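For anyone trying to reproduce this: a quick way to check from the C side whether the driver exports the extension at all, independent of what the GLSL preprocessor reports (just a sketch, using the standard extension-string query):

#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if GL_ARB_draw_buffers appears in the extension string.
   A plain strstr() is usually fine here, though a tokenized comparison is
   more robust against extensions that share the prefix. */
int hasDrawBuffers(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_ARB_draw_buffers") != NULL;
}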