Do drivers cache local copies of textures?

I’ve heard that they can do this.

I’m moving very large floating-point textures to and from the GPU (we’re talking 16–64 MB per texture), and I’m concerned about the driver keeping local copies of the textures in AGP memory, which wastes a great deal of RAM. This is not for game programming, but rather an ‘alternate’ use of the GPU.

As far as I understand it, OpenGL drivers are free to keep a local copy of textures in AGP or RAM, in case the texture on the card needs to be dropped when card memory ‘overflows’.

Any suggestions? Do some of the newer extensions like PBOs allow me to write directly to the graphics card without filling up AGP memory with an image of the texture?
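
To make the question concrete, here is roughly the upload path I have in mind (a sketch only, assuming ARB_pixel_buffer_object and ARB_texture_float are available; texWidth, texHeight and pixels are placeholders, and the extension entry points are obtained via glext.h / wglGetProcAddress as usual):

[code]
// Sketch: stream a large RGBA float texture through a pixel buffer object.
GLsizeiptrARB bytes = texWidth * texHeight * 4 * sizeof(float);

GLuint pbo;
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
// Allocate driver-owned storage; GL_STREAM_DRAW_ARB hints "filled once, used once".
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, bytes, NULL, GL_STREAM_DRAW_ARB);

// Write the texels straight into the buffer (the driver decides where it lives).
void* dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
memcpy(dst, pixels, bytes);
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);

// With a PBO bound to PIXEL_UNPACK, the last argument of glTexImage2D is an
// offset into the bound buffer instead of a client-memory pointer.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, texWidth, texHeight, 0,
             GL_RGBA, GL_FLOAT, (const GLvoid*)0);

glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
[/code]

As far as I can tell the glTexImage2D call then sources its data from the PBO, but I don’t know whether that actually stops the driver from keeping its own shadow copy in AGP or system RAM afterwards.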

If you do not use texture objects, then OpenGL will not retain a copy of the texture. At least as far as I know; someone correct me if I am wrong.

The driver still needs to keep a copy of texture object 0, even if you don’t use any other texture objects. The reason is that Windows can decide to re-configure the display at any time (say, the user changed the display control panel resolution). The driver will be told “hey, you lost this memory” and the API requires that the driver cope.

Thus, the OpenGL API, coupled with the low-level Windows driver details, makes it so that all textures HAVE to have a back-up copy in RAM (not necessarily AGP; it could be paged). (Unless you split your VRAM into separate texture and frame-buffer areas, like some “higher end” architectures have been known to do.)

Originally posted by jwatte:
[b]The driver still needs to keep a copy of texture object 0, even if you don’t use any other texture objects. The reason is that Windows can decide to re-configure the display at any time (say, the user changed the display control panel resolution). The driver will be told “hey, you lost this memory” and the API requires that the driver cope.

Thus, the OpenGL API, coupled with the low-level Windows driver details, makes it so that all textures HAVE to have a back-up copy in RAM (not necessarily AGP; it could be paged).[/b]
Hmmm, not true: the driver can page the textures in vidmem out to sysmem/disk on one side of the mode change and page them back into vidmem on the other side, once the local & front buffers have been allocated. With this method there’s no copy of the textures.

Maybe the confusion comes from the fact that there’s no “hey, you lost your memory” message; actually, a callback (DrvAssertMode) inside the driver is called before the mode change and again after it.

The need to have copies of textures is more dependent on whether the driver is architected to see vidmem as a cache or as any other piece of memory. There’s no real need to have copies of textures, it’s an architectural decision.

One thing that may force the driver to keep a copy of the user-supplied texture for performance reasons is if a region of the texture is updated dynamically by the app and the graphics card only supports updates aligned to a given region size, so the driver needs the original texels to pad out the subtexture update.

Braindead apps which do glGetTexImage frequently may also force the driver to keep copies around.
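
For concreteness, a rough sketch of those two call patterns (tex, the offsets, and the source/destination buffers are just placeholders here):

[code]
// 1. Dynamic sub-rectangle update: if the hardware can only update suitably
//    aligned regions, the driver may need the original texels to pad this
//    rectangle out, which means keeping them around somewhere.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, xoff, yoff, regionW, regionH,
                GL_RGBA, GL_FLOAT, regionPixels);

// 2. Frequent full read-back: doing this every frame gives the driver a strong
//    incentive to keep a system-memory copy of the whole image.
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, readbackBuffer);
[/code]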

[Q]Hmmm, not true: the driver can page the textures in vidmem out to sysmem/disk on one side of the mode change and page them back into vidmem on the other side, once the local & front buffers have been allocated. With this method there’s no copy of the textures.[/Q]
That assumes that the only time textures can be lost is on a mode change. When running Windowed, for example, they could be lost just by some other application being brought to the foreground.

Also, copying the texture back is a serious performance problem. Imagine copying a 128 MB (or more) block of memory over a 33 MHz, 32-bit bus: that peaks at roughly 133 MB/s, so the copy alone takes on the order of a second.

[Q]That assumes that the only time textures can be lost is on a mode change. When running Windowed, for example, they could be lost just by some other application being brought to the foreground.[/Q]

Why would this cause a loss of texture?

I’m not sure if you mean any other program or another GL program.

Even if there are multiple GL programs open, the driver should store all of the combined textures and whatever else in video memory.

Or maybe PC video cards should switch to unified memory, but it looks like the future is PCI Express and nothing else.

[Q]maybe PC video cards should switch to unified memory[/Q]

There are cards out there that use unified memory (where unified means the CPU and the rasterizer share the same memory). Radeon IGP parts and Intel Extreme Graphics are some examples. By and large, they all have terrible performance. The reason is that general-purpose CPU memory will get you a throughput of about 8 GB/s best case, whereas GDDR3 solutions are running at about 4x that speed. The needs of a CPU are very different from the needs of a GPU, so unifying them into the same bottleneck seems like it would be a step backwards.

I suppose it is a step backwards.

What is the situation on *nix systems in terms of display mode changes, task switching and so on?
Is the *nix driver model superior to the Windows one, or what? What about OS X?

Originally posted by V-man:
[b][Q]That assumes that the only time textures can be lost is on a mode change. When running Windowed, for example, they could be lost just by some other application being brought to the foreground.[/Q]

Why would this cause a loss of texture?
[/b]
It doesn’t; this is just Korval theorising on the wrong grounds again. The driver doesn’t “lose” anything: either it evicts the memory to sysmem, or it consciously replaces what’s in vidmem with the required texture.
In the latter case, textures always have a “backing store” in sysmem (so all textures in vidmem are actually copies) and the vidmem acts just as a cache; but then you have the typical redundancy problems, for which you have to keep dirty page bits in case a texture was modified while in vidmem and the backing store needs to be updated.
At any rate the driver is in total control of what happens and can decide what’s best.


[Q]Even if there are multiple GL programs open, the driver should store all of the combined textures and whatever else in video memory.[/Q]

Exactly. And if it can’t, it just uses any replacement policy it sees fit to find candidate textures to evict/overwrite while keeping the rest of the vidmem at full utilisation.

[Q]Or maybe PC video cards should switch to unified memory, but it looks like the future is PCI Express and nothing else.[/Q]
Actually the future is virtual memory, where textures are paged into vidmem and evicted to sysmem as needed (unless you know that a given resource is going to be used only once, in which case a fetch from locked sysmem is more desirable).