Render to texture using FBO on ATI Radeon 9550

Hi All,

I now have a perfectly working FBO-based render to texture on the development machine, but I can't get it working on the ATI Radeon 9550 / X1050 mentioned in the subject of this post. All the necessary extensions are present on the second machine, and the OpenGL version is 2.1.7873.

I'm attaching the code in case somebody discovers something that can be improved.

I also wanted to ask:

  1. whether I can expect a bug or missing functionality in the driver for such a trivial FBO implementation
  2. whether there is a tool to analyze FBOs

Thanks,

Alberto

// Creates the render target texture (no mipmaps; GL_LINEAR min filter keeps it complete)
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureSize, textureSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Creates the FBO and attaches the texture as color attachment 0
fbo = glGenFramebuffersEXT();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureName, 0);

// Checks FBO status
int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
ReportFboStatus(status);

// Draws on the texture
DrawShadow();


I keep saying this to almost everyone who comes here (see previous posts):

ENSURE that you specify the third parameter of glTexImage2D as a known sized internal format; otherwise the driver will pick a format for you. (On ATI it has a habit of choosing a 16-bit texture format, which is probably not a good render target.)

So change your code to:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, textureSize, textureSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, null);

If that does not work, I don’t know what is wrong…

sqrt[-1],

Unfortunately it does not help.

I was wondering whether the choice to use:

gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

and not generating mipmaps can somehow be a "non-standard" approach, because the result is exactly like the one we got when we forgot to make the gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR) call.
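
To make the comparison concrete, here is a sketch (C-style GL; only the filter setup differs) of what I understand to be the two legal options. As far as I know GL_LINEAR without mipmaps is perfectly standard, while a mipmapped min filter without a complete chain leaves the texture incomplete:

// Option 1: no mipmaps; use a non-mipmapped min filter
// (the default, GL_NEAREST_MIPMAP_LINEAR, would make the texture incomplete)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Option 2: keep a mipmapped filter, but regenerate the chain after
// rendering to the texture (glGenerateMipmapEXT comes with EXT_framebuffer_object)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmapEXT(GL_TEXTURE_2D);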

Thanks,

Alberto

This is not standard:
fbo = glGenFramebuffersEXT();

It should be:
glGenFramebuffersEXT(1, &fbo);

Just try the example from the wiki, which will work on every driver and GPU:
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object

Yes V-man,

It simply calls the following:

public static uint GenFramebuffersEXT()
{
   // Convenience wrapper: requests a single framebuffer name from the driver
   uint[] result = new uint[1];
   glGenFramebuffersEXT(1, result);
   return result[0];
}

Thanks,

Alberto

This is one of the many reasons I abandoned that generation of ATi cards. RTT on the R9600 never worked (be it OpenGL or DX9c), and shaders were reported to be version 2.0a, but even 2.0 did not work without crashes.

Thanks Ilian,

This is good news for me; I was going crazy over this issue.

Thanks again,

Alberto

I don’t know what to say, except that I don’t seem to have any trouble with FBOs on the ATI 9550 (last tested around January, when I did my Light Indexed demo).

Perhaps have a look at the Humus framework and see if he does something you don’t? http://www.humus.name/ (I used his framework code)

Thanks for the info, sqrt[-1].
I had almost given up on FBOs in some software of mine, as the R9550 is a minimum target. I hadn’t tested new drivers for it in almost 2 years, as it’s been collecting dust in discrete form.

sqrt[-1],

We are talking about RTT (render to texture) using FBOs, not FBO offscreen rendering.

Were you using RTT with the Humus framework?

Thanks,

Alberto

Ilian,

What does that mean? You gave up on FBOs? And what are you using to get bitmaps from OpenGL scenes?

Thanks,

Alberto

devdept,

What is the point of FBO offscreen rendering? Unless you somehow do a CPU readback (via glReadPixels) or copy the result into a texture (glCopyTexSubImage2D), you are going to have to use a texture to see the result of your rendering.
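
For the readback case, a minimal sketch (C-style GL, reusing fbo and textureSize from the first post) of pulling the color attachment back to the CPU:

// Read the FBO's color attachment back into client memory
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
unsigned char *pixels = (unsigned char *)malloc(textureSize * textureSize * 4);
glReadPixels(0, 0, textureSize, textureSize, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  // back to the window framebuffer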

Anyway, I use FBO textures in my deferred rendering: I render light indices into an RGBA8 surface and look it up as a texture in a later pass.

See the demo and full source here:
http://code.google.com/p/lightindexed-deferredrender/

I create the render target on this line:
if ((lightIndexBuffer = renderer->addRenderTarget(width, height, FORMAT_RGBA8, pointClamp)) == TEXTURE_NONE) return false;

Search for usage of lightIndexBuffer and step through the code.
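
If it helps to map that call back to raw GL, addRenderTarget with FORMAT_RGBA8 and pointClamp boils down to roughly the following (my sketch, not the actual framework code):

// Create an RGBA8 texture with point filtering and clamped addressing
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Then attach it to an FBO as the color target
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);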

devdept, I simply had decided not to use RTT initially. That app is a 2D GUI, and I could easily get away with drawing the complex (necessary-to-cache) stuff with a fast asm software rasterizer of mine and displaying it via GDI DIBs.

Uhm guys, it seems like half the time you refer to PBuffers as offscreen FBO. If that’s the case, let’s get the terminology right :).

sqrt[-1],

I took a look at the code; it uses some different parameters for the FBO, like GL_DEPTH_COMPONENT24, etc., while we are using only the color buffer.

At this point I think it is a driver bug triggered only by the exact parameters we use, because the sample works fine on the machine where our code fails.

Thanks so much again,

Alberto

Ilian,

I thought PBuffers were something completely different from FBOs; please send me a couple of links to improve my knowledge.

Thanks,

Alberto

Ah! This may be the problem! I vaguely recall that some hardware, or ATi, or maybe even the FBO spec, required a depth attachment. And there were some posts here saying that whether you specify GL_DEPTH_COMPONENT, GL_DEPTH_COMPONENT16, or GL_DEPTH_COMPONENT24 did matter for whether the FBO would work / become complete. There’s also an ATi requirement that the depth buffer be the first object attached to an FBO.
Though my brain is fuzzy these days, so don’t rely much on my recollections.

PBuffer != FBO != PBO. “Offscreen FBO” makes no sense to me; FBOs are all offscreen AFAIK, as they’re the real RTT solution. So I thought maybe the terminology in this thread isn’t right.

I believe it is the opposite: you need to attach the texture first.
Ogre had the same problem on ATI until they changed their code.
That’s why I gave the example from the wiki, because it always works:
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object
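
For devdept, a minimal sketch combining both suggestions (color texture attached first, then an explicitly sized GL_DEPTH_COMPONENT24 depth renderbuffer; textureName and textureSize as in the first post):

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

// Attach the color texture first (the ordering ATI reportedly cares about)
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureName, 0);

// Then attach a depth renderbuffer with an explicitly sized format
GLuint depthRb;
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, textureSize, textureSize);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRb);

int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);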

I thought PBuffers were something completely different from FBOs; please send me a couple of links to improve my knowledge.

You would have to be insane to use p-buffers. That extension should never have been created in the first place.

That extension should never have been created in the first place.

No, PBuffers is a perfectly valid extension. Basically, it’s about creating a GL context that doesn’t render to the screen. Even using FBOs still requires that you have a real HWND, while PBuffers doesn’t. The problem is not with the pbuffer extension but with the corresponding WGL_ARB_render_texture extension, which is equal parts useless and stupid.
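
To illustrate the difference, here is a rough outline of pbuffer creation on Windows (WGL_ARB_pbuffer plus WGL_ARB_pixel_format; entry points assumed to be already loaded via wglGetProcAddress, error handling omitted):

// Pick a pbuffer-capable pixel format
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    0
};
int format; UINT count;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);

// Create the pbuffer and a context for it; no window is needed from here on
int pbAttribs[] = { 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, format, width, height, pbAttribs);
HDC pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
wglMakeCurrent(pbufferDC, pbufferRC);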

Perhaps, if you are not attaching a depth buffer, try calling glDisable(GL_DEPTH_TEST) just before your call to glCheckFramebufferStatusEXT?

Edit: add a glDepthMask(GL_FALSE) in there as well.
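
Something like this, sketched against the code from the first post:

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glDisable(GL_DEPTH_TEST);  // no depth attachment, so don't test depth
glDepthMask(GL_FALSE);     // and don't write it either
int status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);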

This code doesn’t work on the machine with the ATI Radeon 9550; it always returns GL_INVALID_VALUE.

Maybe this super recent driver has some bugs (ATI Technologies Inc. 8.522.0.0, dated 7/31/2008); to make it work I need to remove the FBO depth buffer lines.

I’m quite confused.

Even disabling GL_DEPTH_TEST doesn’t help.

Thanks so much to all of you for your help; I will give up struggling with this machine for a while.

Alberto