cost of texture management

It used to be common wisdom that it is much more efficient to create one big texture and pack multiple smaller images into it using TexSubImage, rather than creating a new texture object for each separate image. Is this still true today? I mean, we have NPOT textures now, so wasted space is no longer an issue. Plus, creating/destroying textures should be a problem similar to memory management, and a couple of “new or delete” operations really shouldn’t count for much on today’s systems. Hopefully recent drivers do a good job of managing memory. Any thoughts on this? I’d really love to hear some info from IHVs on this topic!

Andras

The issue is not that drivers can’t handle hundreds of tiny textures; it’s the glBindTexture calls and the associated work that has to happen while rendering the few geometry elements that use each one.
With a texture atlas instead (e.g. the lightmaps of a scene, or the skin of a character, packed into one texture), the texture is selected once and the whole geometry is drawn in one batch. Much faster!
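To make the contrast concrete, here is a minimal sketch (assuming classic fixed-function GL and 16-bit index buffers; the function names are made up for illustration): with many small textures you pay one glBindTexture plus one tiny draw call per image, while with an atlas you bind once and submit one big batch.

```cpp
#include <GL/gl.h>

// Many small textures: one glBindTexture and one tiny draw call per image.
void drawWithManyTextures(int numImages, const GLuint* textureIDs,
                          const GLsizei* indexCounts,
                          const unsigned short* const* indices)
{
    for (int i = 0; i < numImages; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, textureIDs[i]);    // state change per image
        glDrawElements(GL_TRIANGLES, indexCounts[i],
                       GL_UNSIGNED_SHORT, indices[i]);  // tiny batch
    }
}

// Atlas: bind once, then draw all geometry referencing the atlas in one batch.
void drawWithAtlas(GLuint atlasID, GLsizei totalIndexCount,
                   const unsigned short* allIndices)
{
    glBindTexture(GL_TEXTURE_2D, atlasID);
    glDrawElements(GL_TRIANGLES, totalIndexCount,
                   GL_UNSIGNED_SHORT, allIndices);
}
```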

Hmm, good point, I totally forgot about that :slight_smile: What’s the price of a texture switch? I’d guess it invalidates the texture cache, and flushes the rendering pipeline. Or is it even worse?

It’s not the texture switch itself; it’s the fact that you have to break your geometry into more batches just so you can switch textures between them. With an atlas you need only a single draw call for polys that use more than one of the packed textures.
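For illustration, a small sketch of the geometry side of this, assuming each packed image is tracked as a normalized sub-rectangle of the atlas (AtlasRegion and remapToAtlas are made-up names): the original 0..1 texcoords are simply remapped into that sub-rectangle, after which all the polys can share one texture and one draw call.

```cpp
// Where a packed image sits inside the atlas, in normalized [0,1] atlas coords.
struct AtlasRegion
{
    float u0, v0;   // lower-left corner of the sub-rectangle
    float uScale;   // sub-image width  / atlas width
    float vScale;   // sub-image height / atlas height
};

// Remap a texcoord authored against the standalone image (0..1)
// to the corresponding spot inside the atlas.
inline void remapToAtlas(const AtlasRegion& r, float u, float v,
                         float& outU, float& outV)
{
    outU = r.u0 + u * r.uScale;
    outV = r.v0 + v * r.vScale;
}
```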

Originally posted by andras:
Plus, creating/destroying textures should be a similar problem to memory management. And a couple “new or delete” operations really should not count on today’s systems. Hopefully recent drivers do a good job at managing memory. Any thoughts on this?
Although the new/delete op may be fast in itself, the memory bandwidth those operations require (uploading the texture data to the card) has prohibitive costs. You really should not gen/delete textures while running “the game”.
This is another reason not to do it.

The other reason referred to (batching) is also very important as already pointed out.
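If texture contents really have to change at runtime, the usual pattern (rough sketch below, assuming RGBA8 data; the function names are hypothetical) is to allocate the texture object once up front and then only refresh its contents with glTexSubImage2D, so the driver never has to allocate or free video memory in the middle of the game.

```cpp
#include <GL/gl.h>

// Create and size the texture once, e.g. at load time.
GLuint createScratchTexture(GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);   // allocate storage, no data yet
    return tex;
}

// At runtime, only replace the contents of the already-allocated texture;
// no texture object is created or destroyed.
void updateScratchTexture(GLuint tex, GLsizei width, GLsizei height,
                          const void* pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```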

Ok, I’ve looked at texture atlases, but they all seem to be tools/algorithms for offline atlas generation. In my app I need to generate new textures of various sizes on the fly, pack them into a big texture somehow, and then free them after some time… The packing itself doesn’t seem very difficult, but once I start to delete stuff, the atlas will start to fragment. Any pointers to papers/algorithms on how to solve this efficiently? Or maybe there’s even a library?

Thanks,

Andras

Originally posted by andras:
…The packing itself doesn’t seem very difficult, but when I start to delete stuff, it will start to fragment.
Off the top of my head: how about you just allocate one big texture that will always be used for packing, and write a wrapper that manages it. When you need to pack a texture, request that much space from the wrapper; if it has it, it will give it to you. When you no longer need it, ask the wrapper to release the block. The wrapper keeps track of all the available texture blocks. If it can’t find enough space for a request, it defragments the whole texture, packing all the live blocks together; that should create space for your request. If there is still not enough, it can allocate another big texture.
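Here is a rough, hypothetical sketch of such a wrapper (AtlasAllocator and friends are made-up names; it assumes a simple “shelf” placement strategy, and leaves out the pixel re-upload and texcoord fix-up that a real defragmentation pass would need).

```cpp
#include <GL/gl.h>
#include <vector>

struct Rect { int x, y, w, h; };

class AtlasAllocator
{
public:
    // 'texture' is the big GL texture this wrapper manages (kept so a fuller
    // version could re-upload moved blocks with glTexSubImage2D).
    AtlasAllocator(GLuint texture, int size)
        : tex_(texture), size_(size), shelfX_(0), shelfY_(0), shelfH_(0) {}

    // Try to reserve a w x h block; fills 'out' on success.
    bool allocate(int w, int h, Rect& out)
    {
        if (!place(w, h, out))
        {
            defragment();              // repack everything tightly...
            if (!place(w, h, out))     // ...then try once more
                return false;          // caller can fall back to a new atlas
        }
        used_.push_back(out);
        return true;
    }

    // Release a previously allocated block (matched by position).
    void release(const Rect& r)
    {
        for (size_t i = 0; i < used_.size(); ++i)
            if (used_[i].x == r.x && used_[i].y == r.y)
            {
                used_[i] = used_.back();
                used_.pop_back();
                return;
            }
    }

private:
    // Simple "shelf" placement: fill rows left to right, top to bottom.
    bool place(int w, int h, Rect& out)
    {
        if (w > size_ || h > size_) return false;
        if (shelfX_ + w > size_) { shelfY_ += shelfH_; shelfX_ = 0; shelfH_ = 0; }
        if (shelfY_ + h > size_) return false;
        out.x = shelfX_; out.y = shelfY_; out.w = w; out.h = h;
        shelfX_ += w;
        if (h > shelfH_) shelfH_ = h;
        return true;
    }

    // Repack all live blocks from scratch (simplified: assumes they still fit).
    // A real version would also re-upload their pixels and fix up texcoords.
    void defragment()
    {
        std::vector<Rect> old;
        old.swap(used_);
        shelfX_ = shelfY_ = shelfH_ = 0;
        for (size_t i = 0; i < old.size(); ++i)
        {
            Rect moved;
            if (place(old[i].w, old[i].h, moved))
                used_.push_back(moved);
        }
    }

    GLuint tex_;
    int size_;
    int shelfX_, shelfY_, shelfH_;
    std::vector<Rect> used_;
};
```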

If you restrict yourself to POT textures, you should be able to fill the fragmented texture fairly easily. If not, it’s indeed a classic problem, and you need to find some heuristic for when to defragment that fits your use case :slight_smile:
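As a sketch of why the POT restriction helps (assuming images are rounded up to power-of-two squares; QuadNode is a made-up name), the atlas can be managed as a quadtree: free space then only ever comes in power-of-two squares, and a freed leaf can later be merged back with its siblings, so fragmentation stays bounded.

```cpp
// Quadtree node covering a power-of-two square of the atlas. Freeing (not
// shown) would mark the leaf unused and merge empty siblings back into the
// parent; node cleanup is also omitted in this sketch.
struct QuadNode
{
    int x, y, size;          // square region covered by this node
    bool used;               // leaf currently holding an image
    QuadNode* child[4];      // non-null once the node has been split

    QuadNode(int x_, int y_, int size_)
        : x(x_), y(y_), size(size_), used(false)
    { child[0] = child[1] = child[2] = child[3] = 0; }

    // Find (or create by splitting) a free leaf of exactly 'want' size.
    QuadNode* allocate(int want)
    {
        if (used || want > size) return 0;
        if (child[0])            // already split: try the children
        {
            for (int i = 0; i < 4; ++i)
                if (QuadNode* n = child[i]->allocate(want)) return n;
            return 0;
        }
        if (want == size) { used = true; return this; }
        int half = size / 2;     // too big: split into quadrants and recurse
        child[0] = new QuadNode(x,        y,        half);
        child[1] = new QuadNode(x + half, y,        half);
        child[2] = new QuadNode(x,        y + half, half);
        child[3] = new QuadNode(x + half, y + half, half);
        return child[0]->allocate(want);
    }
};
```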