Texture swapping time hit

The general rule is to change the texture as little as possible, but why exactly is that? If I have a 32 MB card, with 16 MB devoted to textures (at least that is how the old Voodoo 2 did things) and my whole program uses 5 MB of textures, how much of a hit is it to swap textures? I assume they will all be loaded on the card which gets rid of a major bottleneck.

It’s because of the texture cache. If you use the same texture again for your next polygon, most of it will already be in the cache, which cuts down on memory bandwidth a lot.

How big is a standard cache? If a texture is much larger than the cache, wouldn’t the performance be the same as swapping between two smaller textures that were stored on the card, since the large texture would pollute the cache, just like the two textures?

By pollute, I mean that the cache could not be used “as is”.

I have not seen any speed improvements from sorting the textures and minimizing the binds. However, since I’m a newbie, I’m still wondering if I should sort or not. Have any of you actually measured a performance hit associated with texture swapping?


Recently I did a little hack in my 3d engine:
Instead of binding the texture for each triangle, I store the id of the last bound texture in a variable and only call glBindTexture if the new texture is different.

I know, very crappy, but I went from 90 fps to 130 fps on my G400
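
For reference, here is a minimal sketch of that trick in C++ (the BindTextureLazy name is just for illustration; only the caching logic matters):

```cpp
// Skip redundant glBindTexture calls by remembering the last bound texture.
#include <GL/gl.h>

static GLuint lastBoundTexture = 0;   // 0 is never returned by glGenTextures

void BindTextureLazy(GLuint texture)
{
    // Only touch the GL state if the texture actually changes.
    if (texture != lastBoundTexture)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        lastBoundTexture = texture;
    }
}
```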

funny, I did the same thing the other day.
Didn’t gain anything, not even 1 fps.
I find it hard to believe you jumped from 90 fps to 130 fps by just using this simple trick.

Originally posted by beavis:
funny, I did the same thing the other day.
Didn’t gain anything, not even 1 fps.
I find it hard to believe you jumped from 90 fps to 130 fps by just using this simple trick.

The performance gain will probably be minimal as long as there is no thrashing. Once you create more textures than your card can hold, the speedup from sorting can be quite large (as confirmed by Andrea’s post).

BTW, you might also consider reversing the rendering order after every frame, to minimize thrashing even further.
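
Here is a rough sketch of both ideas in C++ (the Poly struct and RenderSorted name are made up for illustration): sort the polygons by texture so each texture is bound only once per frame, and flip the walk direction every other frame so the textures still resident from the end of the last frame are reused first.

```cpp
// Sort draw calls by texture and alternate the walk direction each frame
// to reduce texture thrashing when textures don't all fit on the card.
#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct Poly
{
    GLuint texture;
    // ... vertex data would go here ...
};

static bool ByTexture(const Poly& a, const Poly& b)
{
    return a.texture < b.texture;
}

void RenderSorted(std::vector<Poly>& polys, bool reverseThisFrame)
{
    // Group polygons that share a texture so each texture is bound once.
    std::sort(polys.begin(), polys.end(), ByTexture);

    GLuint current = 0;                               // no texture bound yet
    int begin = reverseThisFrame ? (int)polys.size() - 1 : 0;
    int end   = reverseThisFrame ? -1 : (int)polys.size();
    int step  = reverseThisFrame ? -1 : 1;

    for (int i = begin; i != end; i += step)
    {
        if (polys[i].texture != current)              // bind only on a change
        {
            glBindTexture(GL_TEXTURE_2D, polys[i].texture);
            current = polys[i].texture;
        }
        // draw polys[i] here
    }
}
```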

Originally posted by beavis:
funny, I did the same thing the other day.
Didn’t gain anything, not even 1 fps.
I find it hard to believe you jumped from 90 fps to 130 fps by just using this simple trick.

What video card are you using? Perhaps Matrox cards are slow at texture binding…

Anyway, a few days ago someone on this list said to avoid too much texture binding, and I just followed that advice…

OK. I’m rendering an average BSP map; at any given frame I have roughly 120-500 polygons, all using textures and lightmaps. The map contains around 40-50 textures total.
I do a bind for every single polygon.
Then I inspected a dump from my rendering function, noticed many textures used repeatedly in a row, and added the little trick we talked about. I have not seen any improvement.

I tested on TNT 2 Model 64 and GeForce 2 GTS.
From what you guys are saying, I reckon I might see an improvement on cards with less texture memory. Both cards I tested on have 32 megs…

thanks

a 32 MB card, with 16 MB devoted to textures (at least that is how the old Voodoo 2 did things)
All the memory can be used for either textures or frame buffers, even on Voodoo3s.


First: perhaps some drivers do the “what’s the current texture” optimization internally and others don’t. Thus, this optimization may gain you something on certain drivers and nothing on others.

Second: the Voodoo1 and Voodoo2 had specific areas used for the frame buffer (4 MB) and for texture memory (8 MB) (numbers for a 12 MB Voodoo2). Not until the Voodoo3 did they unify the memory, AFAIK – that being the biggest difference between a Voodoo3 and two Voodoo2s in SLI mode (unless you count 2D support )