locking textures?

Is there any way to ‘lock’ a texture in vidmem so it wouldn’t be necessary to transfer it from main memory every frame? It would be very useful if I didn’t have to switch (upload) textures 100 times each frame. Since I have only about 2MB of textures, space (vidmem) wouldn’t be a problem.

Locking textures is not possible (it would imply that if you had, say, a 2MB texture locked with only 2MB of free video RAM, primitives using other textures could not be rendered correctly).
You can, however, look into glPrioritizeTextures(), which lets you set different priorities for different texture objects.
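A rough sketch of how you’d use it (skyTex is just a made-up example of a texture object you want to keep resident; priorities run from 0.0, evict first, to 1.0, keep if at all possible):

#include <GL/gl.h>

GLuint skyTex;                 /* created earlier with glGenTextures */
GLclampf priority = 1.0f;      /* ask the driver to keep this one    */
glPrioritizeTextures(1, &skyTex, &priority);

/* you can also ask whether a texture is currently resident */
GLboolean resident;
glAreTexturesResident(1, &skyTex, &resident);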

Sort your rendering by texture, to minimize texture uploading. That’s the only way around it.

That should theoretically keep all uploads to a minimum.

There’s another trick, too: alternate the render order each frame. For example, if the last thing you render on frame N is the player, then on frame N+1 render the player first, since his textures are still in VRAM.
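Something like this, just to illustrate the sorting idea (DrawItem and the draw callback are made up, not from any real engine):

#include <stdlib.h>   /* qsort */
#include <GL/gl.h>

typedef struct {
    GLuint texture;      /* texture object this item uses     */
    void (*draw)(void);  /* callback that issues the geometry */
} DrawItem;

static int compareByTexture(const void *a, const void *b)
{
    GLuint ta = ((const DrawItem *)a)->texture;
    GLuint tb = ((const DrawItem *)b)->texture;
    return (ta > tb) - (ta < tb);
}

void drawScene(DrawItem *items, size_t count)
{
    GLuint current = 0;   /* 0 = nothing bound yet */
    size_t i;

    qsort(items, count, sizeof(DrawItem), compareByTexture);
    for (i = 0; i < count; i++) {
        if (items[i].texture != current) {   /* bind only on change */
            glBindTexture(GL_TEXTURE_2D, items[i].texture);
            current = items[i].texture;
        }
        items[i].draw();
    }
}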

Nutty

Provided you’re using texture objects (which have been around since GL 1.1), textures should be kept in vidmem automatically between frames if there’s enough space.

Are you actually calling glTexImage2D per texture per frame? If so, then you’ll see a tremendous speed boost by using glGenTextures to create texture objects (GL 1.1) and glBindTexture to switch between them. Beyond that, then yeah, glPrioritizeTextures and texture sorting are the way to go.
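In code, that’s roughly the following (NUM_TEXTURES and the image-data arrays are placeholders, not from anyone’s actual engine):

#include <GL/gl.h>

#define NUM_TEXTURES 16   /* made-up count */

GLuint tex[NUM_TEXTURES];

/* image data, wherever it comes from in your app */
extern int width[NUM_TEXTURES], height[NUM_TEXTURES];
extern const unsigned char *pixels[NUM_TEXTURES];

/* at load time: create the objects and upload each image ONCE */
void loadTextures(void)
{
    int i;
    glGenTextures(NUM_TEXTURES, tex);
    for (i = 0; i < NUM_TEXTURES; i++) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width[i], height[i],
                     0, GL_RGB, GL_UNSIGNED_BYTE, pixels[i]);
    }
}

/* every frame: just switch between the objects, no re-upload */
/*   glBindTexture(GL_TEXTURE_2D, tex[i]); */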

Color me stupid if need be, but I have 64MB of texture RAM on my video board. If I have less than 64MB of textures, doesn’t that mean that I don’t have to worry about such issues?

I assume that by texture memory you mean total video memory (memory on your graphics card). Video memory is not only used for textures, but for the frame buffer, including the back and front color buffers, z buffer, optional stencil buffer, etc. When running at a high resolution and bit-depth, these really add up. e.g. 32-bit front & back color buffers, 24-bit z buffer and 8-bit stencil buffer at 1280x1024 adds up to almost 16MB (if I’ve done my math correctly), leaving your 64MB board with “only” 48MB for textures, and whatever else your board might do with that memory.

If you’re sorting by texture as you should be, I don’t honestly think that manually setting priorities is going to buy you very much. If you need to load a texture in a given frame then you need to load it, period; deciding which vidmem-cached texture to lose is probably a job best left to the driver.

The only exception would be if you know that a particular texture currently in vidmem is NOT going to be needed next frame, and I doubt that happens often enough to justify the coding effort involved.

Iceman - maxuser’s right, and that’s BEFORE you even get into FSAA, which can easily quadruple the memory needed for back, depth and stencil buffers…

MikeC – I somewhat disagree with your claim that texture eviction ought to be decided purely by the driver. Carmack had a pretty detailed .plan about his general dissatisfaction with the texture management capabilities of OGL, and even suggested “virtualizing” texture memory addresses. He gives a pretty good example where an intuitive but naive eviction scheme (least-recently-used) can cause horrendous texture swapping: say you have a large texture that is used every frame (a terrain or sky, for example), but not enough texture memory for all of the relevant textures in a scene. Under LRU, that big texture gets evicted and reloaded every frame (as does every other texture), but with good prioritization you can force the big texture to stay resident, thereby avoiding the unnecessary swapping.

So assuming that drivers actually respect texture priority (dubious, unfortunately), you can use this behavior to your advantage. You can use a big composite texture that holds all of your frequently used textures in a single texture, and index different ones by using different texture coordinates. There are some issues with this (mip-mapping, blending, no repeat wrapping), but a lot of the time you can get away with it. Lightmaps, for example.
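As a sketch (the 4x4 grid of 64x64 tiles inside a 256x256 texture is just an example layout):

#include <GL/gl.h>

/* Remap a tile index plus local (u,v) in [0,1] into coordinates
   within the composite texture. */
void atlasTexCoord(int tile, float u, float v)
{
    const float cell = 1.0f / 4.0f;        /* 4x4 grid of tiles */
    float s = (tile % 4 + u) * cell;       /* column             */
    float t = (tile / 4 + v) * cell;       /* row                */
    glTexCoord2f(s, t);
}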

–Won

Originally posted by maxuser:
… adds up to almost 16MB (if I’ve done my math correctly), leaving your 64MB board with “only” 48MB for textures, and whatever else your board might do with that memory.

Your math is right; the exact total for your example is 15,728,640 bytes. I wrote a little MS Excel spreadsheet to compute this, so if anyone wants it let me know here. I don’t have a website to post it on, so I can email it or whatever…
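For anyone who’d rather not bother with a spreadsheet, the same arithmetic in a few lines of C (bit depths and resolution taken from maxuser’s example):

#include <stdio.h>

int main(void)
{
    long pixels = 1280L * 1024L;                        /* resolution */
    long bpp    = 4   /* 32-bit front color buffer */
                + 4   /* 32-bit back color buffer  */
                + 4;  /* 24-bit z + 8-bit stencil (packed together) */
    long bytes  = pixels * bpp;

    printf("%ld bytes = %.1f MB\n", bytes, bytes / (1024.0 * 1024.0));
    /* prints: 15728640 bytes = 15.0 MB */
    return 0;
}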

Thanks for the great tips folks!
Paul Leopard

TNX to all of you… and another question: why does binding a texture cost so much if it’s already in vidmem? Does anyone have a technical explanation for this? (sorry for bad English)

Won - I agree with everything you say, but I don’t see how it’s an argument for manually setting texture priorities. AFAICS the best driver behaviour in a swapping scenario is to evict any textures which haven’t been used in this frame or the previous one, then start evicting by MRU. The virtual texture idea is a great one (has anyone used one of the 3Dlabs cards that do this?), but it looks like a separate issue to me.

MikeC – my point was really just that the driver isn’t “psychic”: it doesn’t have a very good idea of which textures in memory are likely to be used next. The purpose of texture priority is to give more power to the application, which can have a much better idea about what is going on.

DW – I’ve wondered the same thing myself. I suspect it has to do with the same reason every state change can be costly in GL: GL is implemented as a pipeline, and such a major state change (one that prompts the driver to check whether the texture is resident, etc.) can cause a stall. You’ll have to wait for mcraighead’s spring break to finish for a more complete/detailed answer.

–Won

Originally posted by DarkWIng:
TNX to all of you… and another question: why does binding a texture cost so much if it’s already in vidmem? Does anyone have a technical explanation for this? (sorry for bad English)

Because binding a new texture invalidates your texel cache? IIRC, your card has some on-chip memory that it uses to cache recently referenced texels. If you were to call glBindTexture(), I guess this cache would be invalidated.

I believe that NVidia’s drivers actually check if your glBindTexture() call is redundant or not, so on those cards there shouldn’t be much of a penalty unless your texture isn’t resident anymore.
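If you want that guarantee on every card, it’s easy to do the check yourself (a sketch; this assumes all your binds go through this one wrapper, otherwise the cached value goes stale):

#include <GL/gl.h>

static GLuint lastBound = 0;   /* 0 = nothing bound yet */

void bindTexture2D(GLuint tex)
{
    if (tex != lastBound) {    /* skip redundant binds */
        glBindTexture(GL_TEXTURE_2D, tex);
        lastBound = tex;
    }
}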

  • Tom