Mipmapping

I’m working on an image viewer which will later also have some 3D views. Since there could (and will) be many large textures on the screen at once (at least thumbnails of many large images), I’m not sure if I should use mipmapping.

Let’s assume I have 100 thumbnails (~70x70) on my screen, of images whose real size is around 1600x1200 or more. Can I throw these 100 big images at OpenGL and let it do the work (creating my thumbnails via mipmapping), or would I run out of memory? Or let’s say I have 1000 thumbnails of big images on the screen at once. Does OpenGL page the images back to system memory / hard disk if it runs out of memory?

Or do I have to handle “mipmapping” myself, meaning I only throw scaled-down images at OpenGL?

As textures are not virtualized, the video card needs the complete mipmap chain in on-board memory.
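To put rough numbers on it: assuming RGBA8 at 4 bytes per texel, 100 textures of 1600x1200 with full mipmap chains come to roughly 100 x 1600 x 1200 x 4 x 4/3 ≈ 1 GB of texture memory, so you cannot count on keeping them all resident.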
You may take advantage of hardware-assisted mipmap generation, but for one image at a time: upload the full version, then do a glGetTexImage() to retrieve the mipmap level you actually need. Then you can draw all your thumbnails without mipmapping.
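A minimal sketch of that readback approach, assuming a GL 1.4 context with GL_GENERATE_MIPMAP and an RGBA8 source image (the ~70-texel target size and the level rounding are just illustrative):

#include <GL/gl.h>
#include <math.h>
#include <stdlib.h>

/* Upload the full image, let the driver build the mipmap chain, then
   read back the level closest to thumbnail size and throw the rest away. */
unsigned char *make_thumbnail(const unsigned char *pixels,
                              int width, int height,
                              int *out_w, int *out_h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Ask the driver to generate mipmaps on upload (GL 1.4). */
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Pick the level whose width is closest to ~70 texels. */
    int level = (int)(log((double)width / 70.0) / log(2.0) + 0.5);
    if (level < 0) level = 0;
    int lw = width  >> level; if (lw < 1) lw = 1;
    int lh = height >> level; if (lh < 1) lh = 1;

    /* Read that level back to client memory; this becomes the thumbnail. */
    unsigned char *thumb = malloc((size_t)lw * lh * 4);
    glGetTexImage(GL_TEXTURE_2D, level, GL_RGBA, GL_UNSIGNED_BYTE, thumb);

    /* The big texture is no longer needed on the card. */
    glDeleteTextures(1, &tex);

    *out_w = lw;
    *out_h = lh;
    return thumb;
}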

Different GL implementations can behave differently.
Yes, the driver might move textures to system RAM and then to virtual memory; I’ve had that happen to me. Then again, there is a chance you will get GL_OUT_OF_MEMORY.

I wrestled with a similar problem in a world editor I’m tinkering with. Letting the user browse hundreds or even thousands of textures in a folder can be tricky, especially if they’re viewed at full resolution (which is nice if you’ve never seen them before).

If, like me, you like to keep a copy around on the CPU for various things (which is not at all necessary), be warned that the GL keeps its own copy too, so you’re looking at the same data stored twice over.

Short of trusting the driver to make things work for you automagically (and trusting that you have infinite memory reserves), you could create your own working set and page textures into and out of that set on the fly. For example, suppose the user opens a folder with 300 4096x4096 textures in it. Yikes! You might then resort to platform-specific measures (threading, file streaming, GDI, etc) to load and render your images and handle the whole shebang on the CPU, then commit to the GPU if and when the image is actually used for 3D rendering in another window, punting your CPU copy in the process. You could also use the same working-set idea on the GPU, only you’d be streaming your data into preallocated mipmap pyramids yourself as needed.
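As a rough illustration of the working-set idea, here is a sketch of a fixed-size GPU texture cache with least-recently-used eviction (the slot count, the path-keyed lookup, and the upload_texture() helper are all hypothetical):

#include <GL/gl.h>
#include <string.h>

#define CACHE_SLOTS 64   /* assumed budget: tune to available VRAM */

typedef struct {
    char          path[260]; /* key: image file path */
    GLuint        tex;       /* GL texture object, 0 if slot empty */
    unsigned long stamp;     /* last-use counter for LRU */
} Slot;

static Slot cache[CACHE_SLOTS];
static unsigned long clock_now = 0;

/* Assumed helper: loads the file and creates a GL texture for it. */
extern GLuint upload_texture(const char *path);

GLuint cache_get(const char *path)
{
    int i, lru = 0;

    /* Hit: bump the timestamp and reuse the resident texture. */
    for (i = 0; i < CACHE_SLOTS; ++i)
        if (cache[i].tex && strcmp(cache[i].path, path) == 0) {
            cache[i].stamp = ++clock_now;
            return cache[i].tex;
        }

    /* Miss: evict the least recently used slot (or fill an empty one,
       since empty slots start with stamp 0). */
    for (i = 1; i < CACHE_SLOTS; ++i)
        if (cache[i].stamp < cache[lru].stamp)
            lru = i;
    if (cache[lru].tex)
        glDeleteTextures(1, &cache[lru].tex);

    strncpy(cache[lru].path, path, sizeof cache[lru].path - 1);
    cache[lru].tex   = upload_texture(path);
    cache[lru].stamp = ++clock_now;
    return cache[lru].tex;
}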

If you know that you’ll only be viewing thumbs of around 70x70, then you might just as well store them at that resolution (this requires non-power-of-two hardware, or padding plus texcoord adjustment), only loading the full-res image when it’s actually needed.
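For the padding route, a minimal fixed-function sketch (assuming 70x70 thumbs padded into 128x128, with a texture object already bound):

#include <GL/gl.h>

/* Pad a small thumb into a power-of-two texture and draw it with
   texcoords clipped to the valid region. */
void draw_padded_thumb(const unsigned char *thumb_pixels,
                       float x, float y, float w, float h)
{
    const int tw = 70,  th = 70;  /* actual thumbnail size */
    const int pw = 128, ph = 128; /* next powers of two */

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, pw, ph, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);            /* padded alloc */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tw, th,
                    GL_RGBA, GL_UNSIGNED_BYTE, thumb_pixels); /* fill corner */

    float smax = (float)tw / pw, tmax = (float)th / ph;
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(smax, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(smax, tmax); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, tmax); glVertex2f(x,     y + h);
    glEnd();
}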

I think for the editor scenario, managing a “reasonably sized” texture cache yourself, either on the CPU or the GPU, may be the way to go, rather than having the driver frequently jostle things around in memory for you just to keep a few (potentially huge) ephemeral textures in tow. But that of course depends on your app’s particulars and your target platform(s).

My 2 cents:

If you are using the thumbs for browsing, keep them separate from the main image.

Heck, if I were you, I would require the dataset to include a prerendered jpeg thumbnail file along with the main image, just so you can use it in the browser panel without actually loading the main image it references.

Whether or not you do mipmapping on the main images should depend only on whether or not you plan to render them at different zoom levels and whether you care about fidelity.

Thx for the input. I guess I will have to go with plasmonster’s way. Since my interface is freely zoomable, I can’t just store the thumbnails.

I will store 3 thumbnail sizes in my database and load them on demand. Everything bigger than that will be generated on the fly in a separate thread.

ZbuffeR’s idea also sounds great. I think when I first see a new image, I will let OpenGL generate some mipmaps and use them as my thumbnails, which I save in my database.

Thx!

That makes a lot of sense. Then you can just grab the MIPmap level(s) you need based on the pixel real estate allotted to each pic (which will vary between a 1024x768 display and a 2048x1536 display, even for the same %-width display window). Storing the full chain only takes 33% more space, since each level is a quarter the size of the one above it (1/4 + 1/16 + … ≈ 1/3). There are image formats already out there that support storing MIPmaps in the same file.

Another desirable feature is for the data within a MIPmap level stored in the file to be “chunked”, so you can load a piece of a level without having to load the whole thing (especially advantageous in GIS, where a single image may be, for example, 30,000 x 20,000 texels and 1.2GB). Various formats also support compression within chunks as well as stored MIPmaps, so you get compression and direct access to pieces of individual pre-computed MIPmaps, which means less memory consumption and less time spent in disk I/O. And in the limit, it means you can actually display the image rather than core dumping with an out-of-memory error.
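To make the chunking idea concrete, here is a sketch of finding which 256x256 tiles of a level intersect a view rectangle (the tile size and the flat row-major layout are hypothetical; real formats like tiled TIFF define their own):

#define TILE 256

/* Given a level's width and a view rectangle in that level's texel space,
   enumerate the tiles that must be loaded. */
void tiles_for_view(int level_w,
                    int view_x, int view_y, int view_w, int view_h)
{
    int tiles_across = (level_w + TILE - 1) / TILE;
    int tx0 = view_x / TILE;
    int ty0 = view_y / TILE;
    int tx1 = (view_x + view_w - 1) / TILE;
    int ty1 = (view_y + view_h - 1) / TILE;

    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx) {
            /* With a row-major tile layout the file offset would be
               header + tile_index * tile_size, or looked up in a tile
               index table for variable-size (compressed) tiles. */
            int tile_index = ty * tiles_across + tx;
            (void)tile_index; /* load_tile(tile_index) -- hypothetical */
        }
}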

Check out tiled TIFF, IMG (Imagine), and ECW (Enhanced Compression Wavelet) for some ideas.