FBO on ATI (again...)

Hi folks,

I am really struggling with FBOs on various ATI cards. Currently I am testing on a laptop with an ATI X2300HD.

I am actually having a lot of different problems. One biggie is that it crashes immediately if I try to call glGenerateMipmapEXT() on it. This has been the topic of lots of discussions here, but none of the solutions posted have worked for me. Anyway, I have just given up on that for the moment and disabled mipmaps for FBO textures on ATI cards.

The issue this time is that I use FBOs for render-to-texture, and some of the resulting textures come out garbled (not all of them, though…). This works perfectly on NVIDIA hardware on several platforms (we have tested on Linux, Windows and Mac), but not on ATI.

An example of the result can be seen here:

http://www.worldbeside.com/screenshots/fbo-error.jpg

Here is the FBO setup code:



 // create objects
 glGenFramebuffersEXT(1, &fb_);        // Frame buffer object
 glGenTextures(1, &handle_);           // Texture handle
 if(z_depth_ > 0)
    glGenRenderbuffersEXT(1, &depth_rb_); // Render buffer for z buffering
 
 Btk::logDebug("GngTools") << "GngFBOTexture::initBuffers(): handle = " << handle_ << Btk::endl;
 
 // Make frame buffer active
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb_);
 
 // Mipmap generation for FBOs on ATI just doesn't work :-(
 if(GngGL::getVendor() == GngGL::VENDOR_ATI)
    {
    if(min_filter_ != GL_LINEAR && min_filter_ != GL_NEAREST)
       min_filter_ = GL_LINEAR; 
    if(mag_filter_ != GL_LINEAR && mag_filter_ != GL_NEAREST)
       mag_filter_ = GL_LINEAR; 
    }
    
 // Initialize texture
 glBindTexture(GL_TEXTURE_2D, handle_);
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap_s_);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap_t_); // assuming a wrap_t_ member for the T wrap mode
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, mag_filter_);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, min_filter_);
 
 if(min_filter_ != GL_NEAREST &&
    min_filter_ != GL_LINEAR)
    {
    int mm_level = MtkTools::ilog2(std::min(width_, height_));
    Btk::logDebug("GngTools") << "GngFBOTexture: Max mipmap level = " << mm_level << ", size = " << width_ << " x " << height_ << Btk::endl;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mm_level);
    }
    
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width_, height_, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)NULL);

 // Establish a mipmap chain
 if(min_filter_ != GL_NEAREST &&
    min_filter_ != GL_LINEAR)
    {
    Btk::logDebug("GngTools") << "GngFBOTexture: Enable automatic Mipmap generation" << Btk::endl;
#ifdef ATI_GENERATE_MIPMAP_HACK
    if(GngGL::getVendor() == GngGL::VENDOR_ATI)
       ctx->enable(GL_TEXTURE_2D); // Workaround for a bug in many ATI drivers!
#endif
    glGenerateMipmapEXT(GL_TEXTURE_2D);
    }
 
 // Attach texture to framebuffer color buffer
 glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, handle_, 0);

 GLenum err_code = glGetError();
 if (err_code != GL_NO_ERROR) 
    {
    const char* err_str = (const char*)gluErrorString(err_code);
    Btk::logError("GngTools") << "GngFBOTexture [2] GL error: " << err_str << Btk::endl;
    }
    
 // Initialize depth renderbuffer
 if(z_depth_ > 0)
    {
    Btk::logDebug("GngTools") << "GngFBOTexture: Set up depth render buffer: z = " << z_depth_ << ", w = " << width_ << ", h = " << height_ << Btk::endl;
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb_);
    GLenum depth_fmt = GL_DEPTH_COMPONENT16;
    /*
    if(z_depth_ == 16)
       depth_fmt = GL_DEPTH_COMPONENT16;
    else if(z_depth_ == 24)
       depth_fmt = GL_DEPTH_COMPONENT24;
    else if(z_depth_ == 32)
       depth_fmt = GL_DEPTH_COMPONENT32;
    */
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, depth_fmt, width_, height_);
    
    err_code = glGetError();
    if (err_code != GL_NO_ERROR) 
       {
       const char* err_str = (const char*)gluErrorString(err_code);
       Btk::logError("GngTools") << "GngFBOTexture [4] GL error: " << err_str << Btk::endl;
       }


    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb_);

    }
 
 bool result = checkFBOStatus();
 
 // Set rendering to window
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, prev_fb_);

The FBO status is GL_FRAMEBUFFER_COMPLETE and no GL errors are detected after the setup.
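
For completeness, checkFBOStatus() boils down to glCheckFramebufferStatusEXT(). Here is a minimal sketch of that kind of helper (the name and the stderr logging are placeholders, not our actual code; the status codes are the standard EXT_framebuffer_object ones):

 // Minimal sketch of an FBO completeness check (hypothetical stand-in for
 // checkFBOStatus()); needs <cstdio> plus the usual GL/glext headers.
 bool checkFBOStatusSketch()
    {
    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    switch(status)
       {
       case GL_FRAMEBUFFER_COMPLETE_EXT:
          return true;
       case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT:
          fprintf(stderr, "FBO: incomplete attachment\n");
          break;
       case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_EXT:
          fprintf(stderr, "FBO: no attachments\n");
          break;
       case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT:
          fprintf(stderr, "FBO: attachments have mismatched dimensions\n");
          break;
       case GL_FRAMEBUFFER_INCOMPLETE_FORMATS_EXT:
          fprintf(stderr, "FBO: attachment has a non-renderable format\n");
          break;
       case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
          fprintf(stderr, "FBO: format combination unsupported by the driver\n");
          break;
       default:
          fprintf(stderr, "FBO: unknown status 0x%x\n", status);
          break;
       }
    return false;
    }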

When rendering to the texture, I do the following:


 glPushAttrib(GL_VIEWPORT_BIT);
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb_);
 // glViewport(...) plus modelview/projection setup for the FBO size
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 
 ... render ...

 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
 glPopAttrib();

… and when applying the texture, simply:


glBindTexture(GL_TEXTURE_2D, handle_);
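
One thing worth spelling out for anyone who keeps the mipmap path enabled: the setup code above only establishes the mipmap chain once, so with a mipmapped min filter the chain has to be regenerated after each render pass, before the texture is sampled. A rough sketch, reusing handle_ and min_filter_ from the setup code:

 // Only relevant when min_filter_ is a mipmapped filter; with the
 // GL_LINEAR/GL_NEAREST fallback used on ATI above this is a no-op.
 if(min_filter_ != GL_NEAREST && min_filter_ != GL_LINEAR)
    {
    glBindTexture(GL_TEXTURE_2D, handle_);
 #ifdef ATI_GENERATE_MIPMAP_HACK
    glEnable(GL_TEXTURE_2D); // same ATI driver workaround as in the setup code
 #endif
    glGenerateMipmapEXT(GL_TEXTURE_2D); // rebuild levels 1..n from the freshly rendered level 0
    }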

If anyone can help me with this I’d be extremely happy :-)

Cheers

First of all: Why are you not using ARB_framebuffer_object?

Is the framebuffer texture bound when you start rendering to it?

I’m using EXT_framebuffer_object because that’s what’s available on that computer :-)

Should it really matter though? If a card supports both ARB and EXT_framebuffer_object, shouldn’t the driver code be identical for those two? I thought the ARB version was just a formalization of the EXT one?
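
For what it’s worth, checking which of the two the driver actually exposes is just an extension-string query. A rough sketch, using the plain GL 2.x extension string and a naive substring test:

 // Rough check for ARB vs. EXT framebuffer objects (naive strstr matching,
 // good enough for these two names). Needs <cstring> and a current GL context.
 const char* extensions = (const char*)glGetString(GL_EXTENSIONS);
 bool has_arb_fbo = extensions && strstr(extensions, "GL_ARB_framebuffer_object") != NULL;
 bool has_ext_fbo = extensions && strstr(extensions, "GL_EXT_framebuffer_object") != NULL;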

Hi,

No, the framebuffer texture is not bound when I start rendering to the FBO. I just verified that the texture binding is 0 before binding the framebuffer.

Long shot…

I seem to remember problems with ATI FBOs related to the depth format as well.

Try setting it to plain GL_DEPTH_COMPONENT, without the numeric size suffix.
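
Something along these lines, i.e. just swapping the sized format in the renderbuffer setup above:

 // Unsized depth format: let the driver pick the depth precision.
 glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width_, height_);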

I would recommend that you update your driver to Catalyst 10.3.

One issue is that AMD did not update drivers for laptops on our website, at the OEMs’ request; this is changing with the next release, 10.3, where you should be able to pick up the latest and greatest:

  • support for the latest version of OpenGL, with all extensions available for your hardware (this hardware will not have OpenGL 4.0, but OpenGL 3.3)
  • any potential bug fixes

thanks,

Good news.

AMD did not update drivers for laptops on our website, at the OEMs’ request

One day someone may be able to explain to me why OEMs want to prevent users from updating their drivers…

One day someone may be able to explain to me why OEMs want to prevent users from updating their drivers…

So that they don’t have to field support calls from people who update their drivers. And thus, if you have a bad driver release (see recent NVIDIA drivers that started killing hardware), they don’t have to take any heat from it.

Of course, this means that ATI/NVIDIA take the heat for not making things better. But OEMs don’t care about that ;-)

@Guybrush: If I use plain GL_DEPTH_COMPONENT with no size suffix, it fails with a GL_OUT_OF_MEMORY error.

The reason I haven’t upgraded the drivers on that computer is, as mentioned above, that ATI doesn’t provide drivers for laptops, and the latest one from Fujitsu-Siemens is from 2007 or thereabouts.

Now, if ATI will provide updated drivers for laptops, that is really good news.

This is getting a bit off topic, but I have often wondered how people generally handle these driver issues. How do I determine whether a user has a buggy driver? Do you test on every driver version you can get hold of and write special-case code for each known bug or quirk?

Is there a way to find the actual driver version, by the way? The GL_VERSION string apparently returns different things from one driver to the next. Some drivers return both the GL version and the driver version, whereas others only return the GL version. ATI seems to include both the GL version and the driver version. Earlier NVIDIA drivers included the driver version too, I think, but now it seems to be only the GL version.
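
For reference, what I am looking at are the glGetString() values; a quick dump with the same logging as above (the interesting bit is whatever the vendor chooses to append to GL_VERSION, which is exactly why it varies):

 // Dump the GL identification strings; any driver build/version information
 // the vendor exposes is typically appended to the GL_VERSION string.
 const char* vendor   = (const char*)glGetString(GL_VENDOR);
 const char* renderer = (const char*)glGetString(GL_RENDERER);
 const char* version  = (const char*)glGetString(GL_VERSION);
 Btk::logDebug("GngTools") << "GL_VENDOR   = " << (vendor   ? vendor   : "?") << Btk::endl;
 Btk::logDebug("GngTools") << "GL_RENDERER = " << (renderer ? renderer : "?") << Btk::endl;
 Btk::logDebug("GngTools") << "GL_VERSION  = " << (version  ? version  : "?") << Btk::endl;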

Cheers.