Framebuffer objects are a mess under Leopard

I’ve given up on using Framebuffer objects (FBOs) in my app to save large versions of 3D views. There are too many limitations, and too many bugs.

Some limitations:

  1. Not all Macs support FBOs at all. You have to check for the FBO extension at runtime and offer a fallback when it isn’t supported.

  2. For those machines that do support FBOs, the size of the renderbuffer object you can attach varies greatly from machine to machine, so you can’t be sure how big an FBO you can create until runtime.

  3. The max renderbuffer size reported by the driver can’t actually be used. At a little over half the reported max, the image you get back is black; a little larger than that, and the machine locks up to the point where you have to do a forced shutdown. I couldn’t find a reliable way to determine where that threshold is at runtime.

  4. Even when you request a renderbuffer the system can handle, the machine is brought to its knees while setting it up and rendering to it, because it takes up so much of the system’s VRAM.
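For the first limitation, the runtime check boils down to scanning the GL_EXTENSIONS string for GL_EXT_framebuffer_object. A minimal sketch, assuming the legacy extension-string API (the tokenizing helper is my own illustration, not code from this thread):

```c
#include <string.h>
#include <stddef.h>

/* Return nonzero if `name` appears as a complete token in the
 * space-separated extension string. A plain substring match isn't safe,
 * because one extension name can be a prefix of another. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while (p && (p = strstr(p, name)) != NULL) {
        int starts_token = (p == extensions || p[-1] == ' ');
        int ends_token   = (p[len] == ' ' || p[len] == '\0');
        if (starts_token && ends_token)
            return 1;
        p += len;
    }
    return 0;
}

/* With a live GL context you'd use it roughly like this:
 *   const char *ext = (const char *)glGetString(GL_EXTENSIONS);
 *   if (!has_extension(ext, "GL_EXT_framebuffer_object"))
 *       ... fall back to the non-FBO path ...
 */
```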

I’ve just finished ripping out the FBO code in my app and implementing an approach that renders a series of tiles to the back buffer and assembles them into an image in main memory, piece by piece. Thanks to Relic for pointing me to an open-source library that does pretty much what I needed.

Thanks too to Brian Paul, the author of that library, called “TR, the OpenGL Tile Rendering Library”. Seeing how he did it was very helpful.

I didn’t want to release my app as open source, so I studied the TR library and wrote my own code that does much the same thing.

The glReadPixels call is more powerful than I realized. If you use the calls:

glPixelStorei(GL_PACK_ROW_LENGTH,	save_width);
glPixelStorei(GL_PACK_SKIP_ROWS,	dest_y);
glPixelStorei(GL_PACK_SKIP_PIXELS,	dest_x);

you can assemble image tiles from VRAM directly into the image data of an NSBitmapImageRep in main memory, without having to do any extra stitching.

You have to set up your NSBitmapImageRep so it doesn’t add any padding at the end of a row (by specifying bytesPerRow: as the exact number of bytes needed to store a row of pixels), and then specify a GL_PACK_ALIGNMENT of 1, but it works great.
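As a sanity check on those pack parameters: with GL_PACK_ALIGNMENT set to 1, glReadPixels stores tile pixel (x, y) at the byte offset below, which is exactly the right slot in the full image. A small model of that arithmetic (pure C, no GL context needed; all names here are illustrative):

```c
/* With glPixelStorei(GL_PACK_ALIGNMENT,   1),
 *      glPixelStorei(GL_PACK_ROW_LENGTH,  save_width),
 *      glPixelStorei(GL_PACK_SKIP_ROWS,   dest_y),
 *      glPixelStorei(GL_PACK_SKIP_PIXELS, dest_x),
 * glReadPixels(0, 0, tile_w, tile_h, GL_RGBA, GL_UNSIGNED_BYTE, image)
 * writes tile pixel (x, y) at this byte offset in `image`: */
static long pack_offset(long save_width, long dest_x, long dest_y,
                        long x, long y, long bytes_per_pixel)
{
    return ((dest_y + y) * save_width + (dest_x + x)) * bytes_per_pixel;
}
```

So as long as the destination buffer is save_width pixels wide with no row padding, each tile lands in place and no separate stitching pass is needed.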

I also had to give up on stretching the viewport larger than the dimensions of the window/renderbuffer in order to create a larger image. The viewport is silently clamped to a fairly small value (at most the driver’s MAX_VIEWPORT_DIMS), and your image gets distorted if you exceed that value.

My new code manipulates the projection frustum in order to create a large image. That’s actually pretty straightforward. You just set the left/right and top/bottom of the frustum to zoom in on the tile that you’re getting ready to render, using the same ratio of tile frustum/view frustum as the tile size/view size.
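That tile-frustum math can be sketched like this (my own illustration of the technique described above, not code from the thread; names are made up):

```c
/* Frustum bounds for the full image, in the plane passed to
 * glFrustum/glOrtho. */
typedef struct { double left, right, bottom, top; } Frustum;

/* Given the full-image frustum and a tile occupying pixels
 * [tx, tx+tw) x [ty, ty+th) of a full image that is img_w x img_h
 * pixels, return the frustum that zooms in on just that tile.
 * The tile frustum / full frustum ratio equals tile size / image size. */
static Frustum tile_frustum(Frustum full,
                            int img_w, int img_h,
                            int tx, int ty, int tw, int th)
{
    double w = full.right - full.left;
    double h = full.top   - full.bottom;
    Frustum f;
    f.left   = full.left   + w * tx        / img_w;
    f.right  = full.left   + w * (tx + tw) / img_w;
    f.bottom = full.bottom + h * ty        / img_h;
    f.top    = full.bottom + h * (ty + th) / img_h;
    return f;
}
```

You’d pass the result to glFrustum (or glOrtho for a parallel projection) before rendering each tile, keeping the near/far planes unchanged.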

This might be old hat to most of you, but I spent quite a while pulling out my hair, figuring out how to make this work.

Anyway, I’m now able to reliably create HUGE 3D images and save them to disk as JPEGs or TIFFs.

The limitations you describe aren’t specific to FBO. You’ll have to deal with differences in MAX_TEXTURE_SIZE or MAX_VIEWPORT_DIMS across hardware anyway. And creating a huge renderbuffer is subject to the same limits and performance problems as creating a huge texture, or just a huge malloc in general. An 8192x8192 renderbuffer (or texture) is 128 megs, if you use a 16 bit internal format like 5551. Or 1 gigabyte, if you use a float rgba internal format. Obviously some systems can’t accommodate this request at all, and some will work but at reduced performance.
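A quick check of that arithmetic (a trivial helper of my own, just to make the sizes concrete):

```c
/* Bytes needed by a width x height renderbuffer (or texture) at a
 * given pixel size. 16-bit formats like RGB5_A1 are 2 bytes/pixel;
 * float RGBA is 16 bytes/pixel. */
static long long buffer_bytes(long long width, long long height,
                              long long bytes_per_pixel)
{
    return width * height * bytes_per_pixel;
}
```

8192 x 8192 at 2 bytes/pixel is 128 MiB; the same dimensions at 16 bytes/pixel is a full GiB, which is why some systems refuse the request outright.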

The limits GL exports like MAX_TEXTURE_SIZE are not indications that every request you can ever possibly make will succeed. If MAX_TEXTURE_SIZE is 8192, it only tells you that 8192 will work, for one dimension, in at least one internal format.

arekkusu,

What you say makes sense. What I need, though, is a way to tell what will work on a given machine at runtime.

The memory situation on the video hardware is pretty opaque. I am creating big structures (very large polygon meshes), and want to render them as large as I can.

At first I thought FBOs were a good solution for rendering large 3D images. It turns out they have real limits, and limits that are very hard to predict. For my application (saving a large image to disk), tiling the image and reading the tiles back with glReadPixels is a much better solution.

I’m a longtime developer, but very new to OpenGL, and I’m still trying to learn my way around it. I’m about to post a memory-related question about predicting how big a polygon mesh I can create.
