View Full Version : Saving large images to a file using OpenGL Imaging Subset

11-14-2002, 07:14 AM
My company is working with image files > 2GB. We break up the image into tiles with minification levels for viewing purposes. We also use the Imaging Subset to enhance the image during viewing. I would like to use OpenGL to enhance the entire image (not just the viewed tiles) and then save the enhanced image to an image file. Has anyone developed a scheme for processing extremely large images through the Imaging Subset? Should this be done with a tiling technique, mosaicking the tiles back together afterwards? Thanks for any advice.

11-14-2002, 08:08 AM
You say nothing about your platform.
On SGI systems, which have a better memory architecture and a lot of bus bandwidth, you might have the option of storing very large images in main memory and still having OpenGL hardware process them. There could then be a way to have the hardware chew through very large images without tiling them.
On PC systems, what acceleration there might be of the imaging subset takes place only on images that fit in the local framebuffer and texture memory, which is currently a lot less than the GB range you are dealing with, even on high-end cards.
Note that the imaging subset is currently not accelerated on most consumer hardware. Pixel shaders provide the necessary hardware even on some lower-end cards, but that support is not yet exposed in the drivers.
I don't know exactly what you are doing, but my guess is that you won't find OpenGL hardware that can process your entire image in a single chunk, and since you must cut it up into tiles anyway, you might just as well cut it into small tiles. The only thing to watch out for is edge effects in convolutions: a border pixel must be included in two tiles, and a corner pixel in four, but those duplicated pixels are only a small fraction of the total image data, even if you cut the image into small 256x256 tiles.
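The overlap bookkeeping can be sketched in a few lines. Below, a plain 3x3 box sum (hypothetical, standing in for whatever convolution filter you would actually run through the imaging subset) is applied per tile with a one-pixel halo, and the mosaicked result matches filtering the whole image in one pass:

```python
def box3(img):
    """3x3 box sum with zero padding at the borders; a stand-in for
    any radius-1 convolution the imaging subset might apply."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx]
            out[y][x] = s
    return out

def tiled_box3(img, tile=4, halo=1):
    """Filter the image tile by tile, reading each tile with a
    one-pixel halo of neighboring pixels, then mosaic only the tile
    interiors back together."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            # Output rectangle (the tile proper).
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            # Source rectangle padded by the halo, clamped at the edges.
            sy0, sx0 = max(y0 - halo, 0), max(x0 - halo, 0)
            sy1, sx1 = min(y1 + halo, h), min(x1 + halo, w)
            sub = [row[sx0:sx1] for row in img[sy0:sy1]]
            filt = box3(sub)
            for y in range(y0, y1):
                for x in range(x0, x1):
                    out[y][x] = filt[y - sy0][x - sx0]
    return out
```

For a kernel of radius r the halo must be r pixels wide, so tiles sharing an edge each re-read the other's border rows, which is exactly the duplication described above.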

11-19-2002, 12:15 PM
Yes, you need to tile and call glReadPixels. Even on the IR4 SGI you will be limited to 4k viewports. I'm pretty sure this is the limit for pbuffers as well.
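Under a viewport limit like that, the number of render/readback passes is just ceiling division; a minimal sketch (the function name is my own, not from any GL API):

```python
def readback_passes(width, height, max_side=4096):
    """Grid of viewport-sized passes needed to render and glReadPixels
    an image when the viewport (or pbuffer) is capped at max_side
    pixels on a side."""
    nx = -(-width // max_side)   # ceiling division
    ny = -(-height // max_side)
    return nx, ny, nx * ny
```

For example, a 32768 x 32768 image (about 3 GB of RGB data) would need an 8 x 8 grid of 4k passes.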

In the area of tiling, Chromium is an interesting project: it does tiled rendering across clusters.