large textures

Question: what happens when you try to create a whole bunch of texture objects but not all of them can fit in your video memory at the same time? Does an error occur, or does OpenGL simply create these “overflow” textures in system memory?

Reason I'm asking is that I want to create a very large image viewer (no zoom, ortho mode, the user will be able to scroll it), able to view images as large as 16kx16k pixels. I can't create a texture that large, so I chop it up into smaller tiles, say 1kx1k, and then display whichever tiles are necessary.

If I create these tiles at startup, and they can’t all fit into video memory I want to know what happens. If some do go into system memory, I suppose the user will notice some slowdown in scrolling as data from system memory gets moved to video memory. Any ideas on how to minimise this problem if it is a problem?

That isn’t specified by the standard. If the driver cannot allocate the memory it will generate a GL_OUT_OF_MEMORY error.

I think most drivers should be able to use system memory for textures, and will stream them over the AGP/PCI-E bus, which is much slower than from video memory. You can use glPrioritizeTextures to hint to the driver which textures should be in video memory, and glAreTexturesResident to determine whether textures are resident in video memory.
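A minimal sketch of how those two calls could be combined for the tile textures (legacy, pre-core-profile GL; "tileTex" and "numTiles" are placeholder names for this example, and both calls are only hints the driver may ignore):

#include <GL/gl.h>

/* Ask the driver to keep the given tile textures in video memory.
 * glAreTexturesResident returns GL_TRUE only if *all* queried textures
 * are resident; otherwise it writes per-texture status into resident[]. */
void hint_tile_residency(const GLuint *tileTex, GLsizei numTiles)
{
    GLboolean resident[256];   /* assumes numTiles <= 256 for this sketch */
    GLclampf  priority[256];

    if (glAreTexturesResident(numTiles, tileTex, resident) == GL_TRUE)
        return;                /* everything is already in video memory */

    for (GLsizei i = 0; i < numTiles; ++i)
        priority[i] = 1.0f;    /* 0.0 .. 1.0, higher = prefer resident */

    /* Purely a hint; the driver is free to ignore it. */
    glPrioritizeTextures(numTiles, tileTex, priority);
}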

It may be better to keep each tile as a compressed image (e.g. PNG) and only keep a smaller window of tiles around the viewport as uncompressed textures. That will greatly reduce memory usage and probably give you better control over the swapping.
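A rough sketch of the bookkeeping, assuming 1kx1k tiles of a 16kx16k image; upload_tile() and evict_tile() are hypothetical helpers that would decode a tile's PNG into a GL texture, or delete that texture again:

#define TILE_SIZE     1024
#define IMAGE_TILES_X 16            /* 16k / 1k */
#define IMAGE_TILES_Y 16
#define MARGIN        1             /* extra ring of tiles as prefetch margin */

/* Hypothetical helpers: create the GL texture for a tile from its PNG,
 * or glDeleteTextures it when the tile leaves the window. */
void upload_tile(int tx, int ty);
void evict_tile(int tx, int ty);

/* Keep only the tiles overlapping the viewport (plus a margin) as textures. */
void update_tile_window(int scrollX, int scrollY, int viewW, int viewH)
{
    int x0 = scrollX / TILE_SIZE - MARGIN;
    int y0 = scrollY / TILE_SIZE - MARGIN;
    int x1 = (scrollX + viewW) / TILE_SIZE + MARGIN;
    int y1 = (scrollY + viewH) / TILE_SIZE + MARGIN;

    for (int ty = 0; ty < IMAGE_TILES_Y; ++ty)
        for (int tx = 0; tx < IMAGE_TILES_X; ++tx) {
            if (tx >= x0 && tx <= x1 && ty >= y0 && ty <= y1)
                upload_tile(tx, ty);    /* no-op if already uploaded */
            else
                evict_tile(tx, ty);     /* no-op if not uploaded */
        }
}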

OpenGL should not report any errors. Of course you can't be 100% sure, because driver quality varies, but I am pretty sure this is implemented well enough, because it is a common problem. Doom 3, for example, might use more than 256 MB of textures (or was it even more than 512?); that's what some hardware vendors used to demonstrate that their graphics cards with lots of memory run faster.

Of course, when the driver swaps textures in and out, lags occur. You could try to minimize those by:

  • using fairly small textures (for example 512*512); that should make the driver's life easier when it has to find free memory, and there is less data to swap if only part of a tile is visible.
  • binding adjacent textures even though they are not yet visible. This way you force the driver to upload those textures, so that when scrolling, not all adjacent textures need to be uploaded in one bunch.
  • fiddling with the mipmap bias and such, so that the driver does not need to upload all mip levels if you have zoomed out. I'm not sure how well drivers actually do that, though; you might be forced to manage mipmapping yourself (see the sketch after this list).
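For the third point, a minimal sketch of clamping the usable mip range and biasing the LOD (GL 1.2 / GL 1.4 features; whether the driver actually skips uploading the excluded levels is not guaranteed):

#include <GL/gl.h>   /* GL_TEXTURE_FILTER_CONTROL may need glext.h on old headers */

/* Restrict which mip levels GL may sample from and push sampling towards
 * smaller levels, in the hope that the driver keeps only those resident. */
void clamp_mip_levels(GLuint tex, int baseLevel, int maxLevel, float lodBias)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, baseLevel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL,  maxLevel);

    /* Global LOD bias (GL 1.4); positive values select smaller mip levels. */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, lodBias);
}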

Hope that helps,
Jan.

Jan wrote:

binding adjacent textures even though they are not yet visible.
That’s quite an ingenious idea. Anyway, I have some good suggestions from this thread and another one I posted at gamedev, so I have plenty to play with. Thanks guys.

Originally posted by Jan:
OpenGL should not report any errors. Of course you can't be 100% sure, because driver quality varies, but I am pretty sure this is implemented well enough, because it is a common problem. Doom 3, for example, might use more than 256 MB of textures (or was it even more than 512?); that's what some hardware vendors used to demonstrate that their graphics cards with lots of memory run faster.
One problem is that all textures used simultaneously (i.e. during a single draw call) must fit into GPU-addressable memory. Thus you might be able to create 16 4096² 32-bit textures (64 MiB each, 1 GiB total) because the driver uses system memory, but using them all in the same shader can result in incorrect rendering.

* binding adjacent textures even though they are not yet visible. This way you force the driver to upload those textures, so that when scrolling, not all adjacent textures need to be uploaded in one bunch.
This might work for some drivers, but there is no requirement that binding a texture without using it forces a texture upload.

So what OpenGL function forces a texture upload?

So what OpenGL function forces a texture upload?
Any function that forces the GPU to sample from the texture. That means more or less rendering something. You could keep a 1x1 FBO around for that purpose.
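Something along these lines could work (a sketch only, assuming a compatibility context so immediate mode can be mixed with FBOs; older drivers would use the EXT_framebuffer_object entry points instead):

#include <GL/gl.h>   /* FBO entry points are core since GL 3.0 */

/* Force the driver to make "tex" resident by actually sampling from it:
 * render one textured quad into a throwaway 1x1 FBO. */
void prewarm_texture(GLuint tex)
{
    GLuint fbo, color;

    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);

    glViewport(0, 0, 1, 1);              /* caller restores the viewport */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glBegin(GL_QUADS);                   /* one tiny textured quad */
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &color);
}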

You can use glPrioritizeTextures
AFAIK texture priorities are not supported anymore. The driver does it on its own, and calling glPrioritizeTextures has no effect.

Originally posted by cragwolf:
So what OpenGL function forces a texture upload?
You could simply render a small quad first, and then render everything on top of it, so that the quad isn't visible anymore. This way there is no trick the driver could use to optimize it away and avoid uploading the texture to the GPU.
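A sketch of that pre-warming pass, assuming the ortho, pixel-coordinate setup of the viewer; "prefetch" is a hypothetical list of the tile textures expected to scroll into view next:

#include <GL/gl.h>

/* At the start of the frame, sample each soon-to-be-needed tile once by
 * drawing it as a one-pixel quad; the visible tiles drawn afterwards cover
 * these pixels, but the driver still has to upload the sampled textures. */
void prewarm_adjacent_tiles(const GLuint *prefetch, int count)
{
    glEnable(GL_TEXTURE_2D);
    for (int i = 0; i < count; ++i) {
        glBindTexture(GL_TEXTURE_2D, prefetch[i]);
        glBegin(GL_QUADS);               /* a single pixel in the corner */
            glTexCoord2f(0, 0); glVertex2i(0, 0);
            glTexCoord2f(1, 0); glVertex2i(1, 0);
            glTexCoord2f(1, 1); glVertex2i(1, 1);
            glTexCoord2f(0, 1); glVertex2i(0, 1);
        glEnd();
    }
}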

By setting the LOD bias you should be able to make sure that certain mipmap levels are resident on the GPU, if you have a use for such a feature.

Jan.

Originally posted by Jan:
You could simply render a small quad first, and then render everything on top of it, so that the quad isn't visible anymore. This way there is no trick the driver could use to optimize it away and avoid uploading the texture to the GPU.
A deferred renderer can actually optimize that away. :wink: