Layered Framebuffers: glClear not working correctly

I have been trying to get a 360° renderer running, and after what seem like hours of debugging trying to find the mistake, I have come to the conclusion that I may have hit a driver bug.

The problem:

When using layered framebuffers with GL 3.3 AMD Radeon HD drivers, the glClear function (or the glClearBuffer* variants) does not work correctly. It only clears layer 0 of the full layered texture object.

The minimal test I’ve performed does this:

  1. Create a layered texture object. I’ve tested 3D textures, 2D texture arrays and cubemaps, all with the same results.

  2. Create a framebuffer and attach the texture object to color attachment 0.

glCheckFramebufferStatus returns GL_FRAMEBUFFER_COMPLETE.
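For reference, here’s a minimal sketch of that setup, assuming an initialized GLEW and a core 3.3 context (the 2D-array variant; sizes, formats, and the helper name are just illustrative):

#include <GL/glew.h>
#include <cassert>

// Minimal sketch of the test setup: a layered 2D texture array attached
// whole (all layers) to color attachment 0.
void createLayeredFBO(GLuint* texOut, GLuint* fboOut)
{
    // 1. Create a layered texture object -- a 2D texture array with 6 layers.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 512, 512, 6,               // width, height, layer count
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // 2. Create a framebuffer and attach the whole texture to color
    //    attachment 0. Using glFramebufferTexture (as opposed to
    //    glFramebufferTextureLayer) is what makes the attachment layered.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);

    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

    *texOut = tex;
    *fboOut = fbo;
}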

When rendering, if I use glClear, only layer 0 is cleared. However, I am able to draw to all layers: the geometry shader’s output appears correctly on every layer, and I can visualize this output whether it’s a cubemap, a 3D texture, or a 2D texture array; the behaviour is the same in all cases.
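For context, the drawing path that does work routes primitives to layers through gl_Layer in the geometry shader. A minimal illustrative shader (GLSL 330, six layers assumed), kept here as a C++ string:

// Emits each incoming triangle once per layer; gl_Layer selects the
// destination layer of the layered framebuffer attachment.
const char* layeredGeometryShader = R"glsl(
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;   // 6 layers * 3 vertices

void main()
{
    for (int layer = 0; layer < 6; ++layer) {
        for (int i = 0; i < 3; ++i) {
            gl_Layer    = layer;                 // destination layer
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)glsl";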

I’ve tried everything I could think of, but the GL 3.3 specification is very clear about how glClear should work on layered framebuffers: it simply says all layers will be cleared.

Have I hit a driver bug, or am I missing some obscure way to clear all the layers? This basically destroys any use of layered rendering.

Current video driver:

Driver Packaging Version 8.892-110914m-125030C-ATI
Catalyst Version 11.9
Provider ATI Technologies Inc.
2D Driver Version 8.01.01.1186
Direct3D Version 7.14.10.0860
OpenGL Version 6.14.10.11079
AMD VISION Engine Control Center Version 2011.0908.1355.23115
AMD Audio Driver Version 7.12.0.7702

I’m about to go check older or beta drivers if I can find them, but I figured I’d drop this problem here first because testing drivers is going to take a while.

Thank you in advance.

And a follow up:

So I managed to get it to clear the layers correctly; however, this is a decidedly weird one.

I grabbed some example code I found, copy-pasted it into my project, and adapted it a little. To my surprise, it worked with a 2D array and was clearing all layers as expected.

So I started cutting lines of code to narrow down the main difference, until I finally spotted it:

glBindFramebufferEXT

vs

glBindFramebuffer

I am using GLEW to initialize my extensions, and it seems to initialize both of these function entry points. For some reason, the code works when using glBindFramebufferEXT but not when using glBindFramebuffer.

From what I understand, I should be getting the same functionality from both, should I not? However, checking the pointers at runtime, they actually point to different addresses.
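For anyone who wants to reproduce that check: under GLEW both names are loaded function pointers, so they can be printed and compared directly. A minimal sketch (the helper name is mine):

#include <GL/glew.h>
#include <cstdio>

// Prints and compares the core and EXT entry points after glewInit().
void compareBindEntryPoints()
{
    std::printf("glBindFramebuffer    = %p\n", (void*)glBindFramebuffer);
    std::printf("glBindFramebufferEXT = %p\n", (void*)glBindFramebufferEXT);

    if ((void*)glBindFramebuffer != (void*)glBindFramebufferEXT)
        std::printf("the driver exposes two distinct implementations\n");
}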

Clearly I’m missing something, and it’s probably related to the extension mechanism via GLEW, but I cannot find any sources hinting at this problem.

My understanding was that if glBindFramebuffer is available, then glBindFramebufferEXT would be redundant. I suppose I could consider that glBindFramebufferEXT is giving me functionality not currently supported by glBindFramebuffer, but I’m asking for a core 3.3 GL context, and I understood that this context supports layered framebuffers without extensions.

What am I missing here?

What am I missing here?

That this is a driver bug. By definition, it is something that shouldn’t be happening. A driver bug does not have to follow any rhyme or reason; it simply is. There’s no explanation for it besides, “someone at AMD screwed up.”

However, checking the pointers at runtime, they actually point to different addresses.

And where in either specification does it say that they would be the same function? By all rights, the driver can (and should) fail if you use glGenFramebuffers to create an FBO but use glBindFramebufferEXT to bind it.

Thank you for the response.

It was my conclusion after a lot of tinkering, but since I’m still learning the ropes of 3.3+, I was starting to doubt that I was interpreting the information correctly. Hopefully I won’t hit other bugs like this, hah!

And where in either specification does it say that they would be the same function? By all rights, the driver can (and should) fail if you use glGenFramebuffers to create an FBO but use glBindFramebufferEXT to bind it.

As for your second point, you are right, it’s not strange. I wasn’t trying to imply they should be the same function, of course; I was just pointing out that GLEW was indeed binding different entry points and not just redefining the function name. I’m truly not very familiar with GLEW; I just know it works and saves me the hassle of loading the entry points myself, which is perfect for me right now.

So I’ll shrug it off as a driver bug. It actually gets even weirder. Here are more details:

I’ve been able to combine a cube map attached as the depth buffer with a 2D texture array as the color attachment and get correct results (glClear works, the depth buffer is wiped when asked, etc.). However, using a cube map for the color attachment falls back to the odd behaviour of glClear wiping no layer but 0 (while still being able to write to all layers).

Two 2D texture arrays seem to work, however. Luckily, 2D arrays are more useful for my current application, but go figure.
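In case it helps someone, here’s a sketch of the attachment combination that behaved correctly for me; texture creation is omitted, and the names and formats are illustrative:

#include <GL/glew.h>

// 'colorArray' is assumed to be a 6-layer GL_TEXTURE_2D_ARRAY (e.g. GL_RGBA8)
// and 'depthCube' a cube map with depth-format faces (e.g. GL_DEPTH_COMPONENT24).
void attachMixedLayered(GLuint fbo, GLuint colorArray, GLuint depthCube)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorArray, 0);
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  depthCube,  0);

    // With this setup, glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    // hit every layer; swapping the color attachment for a cube map brought
    // back the layer-0-only clearing.
}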

Actually, this scenario constitutes a pretty nasty edge case. This is because glBindFramebufferEXT doesn’t require its name to be ‘genned’, and according to the specification, glGenFramebuffers marks names as used for its own bookkeeping only.
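In code, the difference looks like this (a sketch; 42 is an arbitrary, never-genned name):

#include <GL/glew.h>

// Under EXT_framebuffer_object, binding an arbitrary, never-genned name is
// legal and creates the object; under core GL 3.3, binding a name not
// returned by glGenFramebuffers generates GL_INVALID_OPERATION.
void bindEdgeCase()
{
    GLuint name = 42;  // never obtained from glGenFramebuffers

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, name);  // legal per the EXT spec
    glBindFramebuffer(GL_FRAMEBUFFER, name);         // GL_INVALID_OPERATION per core
}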

Not even driver writers care, though. Last time I checked, neither ATI nor NVIDIA cared which names were passed to glBindFramebuffer or glBindFramebufferEXT.

Apparently the newer drivers (currently marked as “11.10 preview”) fix the bug (at least with 2D arrays). I was able to test the application on another AMD machine with Catalyst 11.9, and it failed to clear the layers. Once upgraded to the newer drivers, it worked immediately. Seems like a direct cause.

Thanks for the support.