Possible NVIDIA FBO bug?

I’ve got a piece of code used for high-res screenshots and I’m getting artifacts between tiles (visible borders), but only for 4096^2 tile sizes. I created a standalone app that just does one tile and the problem’s evident there too. The relevant code is:

    // Buffer ids.
    GLuint uFramebuffer;
    GLuint uColourbuffer;

    // Generate buffers.
    glGenFramebuffersEXT(1, &uFramebuffer);
    glGenRenderbuffersEXT(1, &uColourbuffer);

    // What are the implementation-defined maximum renderbuffer dimensions? On my GF5200 it'll be 4096.
    int nRenderbufferSize(0);
    glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &nRenderbufferSize);
    // If you force the size down to 2048 the problem disappears.
    //nRenderbufferSize = 2048;

    // Bind the application's framebuffer.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, uFramebuffer);

    // Attach a colour renderbuffer image.
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, uColourbuffer);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA, nRenderbufferSize, nRenderbufferSize);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, uColourbuffer);

    // Make sure it worked.
    assert(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT);

    // Set the draw and read locations.
    glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);

    // Make sure the framebuffer's clean.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // Pixel storage for the readback; 32-byte aligned, 4 bytes per pixel (BGRA).
    GLubyte *pPixels = static_cast<GLubyte *>(_aligned_malloc(nRenderbufferSize * nRenderbufferSize * 4, 32));
    assert(pPixels);

    // Get the framebuffer pixels.
    glReadPixels(0, 0, nRenderbufferSize, nRenderbufferSize, GL_BGRA, GL_UNSIGNED_BYTE, static_cast<GLvoid *>(pPixels));

    // Save pixels in a Bitmap.
    Gdiplus::Bitmap bitmap(nRenderbufferSize, nRenderbufferSize, 4 * nRenderbufferSize, PixelFormat32bppARGB, pPixels);
    CLSID jpgClsid;
    GetEncoderClsid(L"image/jpeg", &jpgClsid);
    Gdiplus::Status stResult(bitmap.Save(L"FBObug.jpg", &jpgClsid, NULL));
    assert(stResult == Gdiplus::Ok);

    // Free mem.
    _aligned_free(pPixels);

    // Bind the system's framebuffer.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    // Clean up.
    glDeleteFramebuffersEXT(1, &uFramebuffer);
    glDeleteRenderbuffersEXT(1, &uColourbuffer);

so nothing extraordinary. It doesn’t matter whether I use GDI+ or a proprietary library (LEADTOOLS) to save the images. I’ve viewed the output in IrfanView, Gimp 2.2, ERViewer and some MS utilities, and there is a visible band of garbage pixels along the right-hand border. Maybe I’m just missing something really obvious? I created some sample code here if you want to download, compile and test the problem. It does the same thing using FBOs and pbuffers, and shows that FBOs have the problem while same-sized pbuffers don’t. You’ll need GDI+ (i.e. WinXP or a free download); the GLEW stuff is included, and I built with VS2005, so you’ll need to set up your own project if you’re using another compiler.
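
For reference, the pbuffer path in the sample does essentially the same render-and-read, just into a WGL pbuffer of the same size. A rough sketch of that setup (assuming the WGL_ARB_pbuffer/WGL_ARB_pixel_format entry points are already loaded, e.g. via GLEW’s wglew, and hDC is the window’s device context; this isn’t the exact code from the zip):

    // Pick a pixel format suitable for an RGBA pbuffer.
    const int pPixelFormatAttribs[] =
    {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      32,
        WGL_ALPHA_BITS_ARB,      8,
        0
    };
    int nPixelFormat(0);
    UINT uNumFormats(0);
    wglChoosePixelFormatARB(hDC, pPixelFormatAttribs, NULL, 1, &nPixelFormat, &uNumFormats);
    assert(uNumFormats > 0);

    // Create the pbuffer at the same size as the renderbuffer and make it current.
    const int pPbufferAttribs[] = { 0 };
    HPBUFFERARB hPbuffer = wglCreatePbufferARB(hDC, nPixelFormat, nRenderbufferSize,
                                               nRenderbufferSize, pPbufferAttribs);
    HDC hPbufferDC = wglGetPbufferDCARB(hPbuffer);
    HGLRC hPbufferRC = wglCreateContext(hPbufferDC);
    wglMakeCurrent(hPbufferDC, hPbufferRC);

    // ... render, then glReadPixels exactly as in the FBO case ...

    // Clean up the pbuffer.
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(hPbufferRC);
    wglReleasePbufferDCARB(hPbuffer, hPbufferDC);
    wglDestroyPbufferARB(hPbuffer);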

Thanks in advance.

I forgot to mention: I’ve tried this with 3 recent beta drivers; currently I’m on 84.56.

Update: I tried the same thing as above, but using a texture framebuffer attachment and glGetTexImage instead of a renderbuffer attachment and glReadPixels, and the problem doesn’t exist, which reinforces my thinking that it’s a driver bug. I updated the zip file with the texture method too.
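
The texture variant looks roughly like this (a simplified sketch rather than the exact code from the zip; uFramebuffer, nRenderbufferSize and pPixels are the same as above):

    // Create a texture to use as the colour attachment instead of a renderbuffer.
    GLuint uColourTexture;
    glGenTextures(1, &uColourTexture);
    glBindTexture(GL_TEXTURE_2D, uColourTexture);
    // No mipmaps are allocated, so use a non-mipmapped min filter to stay framebuffer complete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, nRenderbufferSize, nRenderbufferSize, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, NULL);

    // Attach the texture to the framebuffer's colour attachment point.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, uFramebuffer);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, uColourTexture, 0);
    assert(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT);

    // ... render the tile ...

    // Read back from the texture instead of calling glReadPixels on the FBO.
    glBindTexture(GL_TEXTURE_2D, uColourTexture);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, pPixels);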

Try to do a raw image dump and read it with Gimp/Photoshop. I wouldn’t use JPGs for this kind of testing.
What about other, non-power-of-two buffer sizes? Does the problem exist there as well?
I have had the same experience with glReadPixels and glGetTexImage not returning exactly the same data. I hope, for your sake, that this isn’t another “quality” feature only available on Quadro cards…
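
Something along these lines is enough for a raw dump that IrfanView or Gimp can open as raw BGRA (just a sketch; pPixels and nRenderbufferSize as in the code above, and it needs <cstdio>):

    // Dump the raw BGRA readback straight to disk so a viewer can load it as
    // raw data: width = height = nRenderbufferSize, 4 channels, 8 bits each, BGRA order.
    FILE *pFile = fopen("FBObug.raw", "wb");
    assert(pFile);
    fwrite(pPixels, 1, static_cast<size_t>(nRenderbufferSize) * nRenderbufferSize * 4, pFile);
    fclose(pFile);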

OK. I updated the project. The new code is a VS2003 project. At home on a 6800GT the problem still exists, and it happens with framebuffer/renderbuffer attachments, framebuffer/texture attachments and pbuffers, all at 4096^2. I create PNG files and raw BGRA data; IrfanView will view the raw data. All of the images are corrupt on a boundary: a 1-pixel-wide right-hand boundary for both framebuffer methods, and a 1-pixel-wide top and right-hand boundary for pbuffers. Forcing the size to < 4096 removes the problem; even 4095^2 works. I wish NVIDIA would return a different number for GL_MAX_RENDERBUFFER_SIZE_EXT, WGL_MAX_PBUFFER_WIDTH_ARB and WGL_MAX_PBUFFER_HEIGHT_ARB, because as it is I’ll have to put in special-case code to handle the problem until it’s fixed in the drivers :(
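
The special-case code will probably end up being something like this: query the limit the driver reports, then clamp it to a size that’s known to work (a sketch; the 4095 limit is just what worked in my testing):

    // Ask the driver for the maximum renderbuffer size, then clamp it below 4096
    // as a workaround, since sizes of 4095 and under read back cleanly here.
    GLint nMaxSize(0);
    glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &nMaxSize);
    const GLint nKnownGoodLimit = 4095;
    if (nMaxSize > nKnownGoodLimit)
        nMaxSize = nKnownGoodLimit;
    // Use nMaxSize for glRenderbufferStorageEXT and the tile dimensions from here on.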

Hi ffish,

I need to implement an algorithm similar to yours to create bitmap images from my OpenGL scene.

Are you still happy with this method, or is it better to switch to something else, like pbuffers or memory DCs?

Thanks a lot,

Alberto

I’ve got something strange too with FBOs on nVidia under Linux:

Doing shadow volumes without any use of FBOs (which I use for other stuff) results in artifacts when the shadow caster has some holes in it (as everyone knows, this is normal). But when I enable FBOs for the other stuff, the artifacts become enormous (big black squares everywhere).

I get the same thing. I’m using FBOs for cube shadow maps on Linux (it does the same under Windows, btw). I get some kind of checkered (or just garbage) pattern on some of the cube faces that haven’t been updated yet.

I get the impression that updating one of the faces has a nasty habit of corrupting the other faces.
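
For what it’s worth, the per-face update is basically one glFramebufferTexture2DEXT call per cube face, roughly like this (a simplified sketch with made-up names, not the actual code; it assumes uCubeTexture is an RGBA cube map holding the shadow distances and that a depth renderbuffer is already attached to uFramebuffer):

    // Render into each cube face in turn; the six face enums are consecutive.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, uFramebuffer);
    for (int nFace = 0; nFace < 6; ++nFace)
    {
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_CUBE_MAP_POSITIVE_X + nFace,
                                  uCubeTexture, 0);
        assert(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... set the view for this face and render the shadow casters ...
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);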

-Twixn-