Conceptual: Big animated textures, how to?

Hi everybody.

In a small PC project, I would like to create a 3D scene inside a tall cube, whose 6 faces would be used to display 6 different hi-res animated textures (1024×1024, or better: 2048×2048 if possible…).
With my naive analysis, I estimate the requirement as 6 textures multiplied by the number of animation frames (let's say about 40 per texture) in VRAM, at 4 bytes per pixel, which means 6×1024×1024×40×4 bytes = 960 MB. Am I wrong?

That is why, before starting to code anything, I'm coming here to ask whether this idea is hopeless, and if there is a trick, what it could be.

I probably need a better understanding of how textures are stored, and an overview of graphics cards' capabilities regarding OpenGL. Can somebody tell me the usual number of textures, and their sizes, stored in VRAM in typical games?
I read that OpenGL can compress textures, but I'm unsure whether that is done in VRAM itself; is it? If so, is decompression required before displaying the texture? And are some compression formats more standard than others, available on any hardware?

Thanks a lot in advance for your advice.
Cheers.

Video compression. Typical video formats (e.g. H.264) offer compression ratios of 100:1 or better compared to a sequence of raw images.

The main issue is that decompressing video is complex, and even more so if done on the GPU. But the memory requirements for uncompressed video mean that compression is typically mandatory for anything beyond the simplest cases.

[QUOTE=Catelite;1288419]…cube, whose 6 faces would be used to display 6 different hi-res animated textures (1024×1024, or better: 2048×2048 if possible…).
With my naive analysis, I estimate the requirement as 6 textures multiplied by the number of animation frames (let's say about 40 per texture) in VRAM, at 4 bytes per pixel, which means 6×1024×1024×40×4 bytes = 960 MB. Am I wrong?[/QUOTE]

No, given your assumptions, you’re correct.

However, consider: for a 3D cube-map view you're probably not using alpha, so 25% of that is wasted. And even that can be reduced greatly through the use of GPU compressed texture formats. At the limit, if you use DXT1 you get an average space consumption of 1/2 byte per texel. That takes your estimate down to 1024×1024×0.5×6×40 bytes = 120 MB. There are other, newer compressed texture formats you could consider as well.
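Just to make that arithmetic concrete, here's a quick C++ sketch (only an illustration, nothing OpenGL-specific) of the DXT1 footprint, using the fact that DXT1 packs every 4×4 block of texels into 8 bytes:

[CODE]
#include <cstddef>
#include <cstdio>

// DXT1 stores each 4x4 block of texels in 8 bytes (0.5 bytes per texel).
std::size_t dxt1Size(std::size_t width, std::size_t height)
{
    return ((width + 3) / 4) * ((height + 3) / 4) * 8;
}

int main()
{
    const std::size_t faces = 6, frames = 40;
    const std::size_t perImage = dxt1Size(1024, 1024);        // 524,288 bytes
    const std::size_t total    = perImage * faces * frames;   // 125,829,120 bytes
    std::printf("%zu bytes (~%.0f MB)\n", total, total / (1024.0 * 1024.0));
    return 0;
}
[/CODE]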

But using GPU compressed texture formats probably only makes sense if your cube maps are pre-generated. Are they? I'm assuming so, because otherwise why would you be concerned about the total GPU space consumption of a series of frames?

Also, keep in mind that depending on the required bandwidth, you can stream your data to the GPU dynamically from CPU memory and potentially from disk, avoiding the need to house all of the texture data on the GPU at once (if that’s otherwise impossible).
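For illustration, here's a rough C++ sketch of that kind of streaming through a pixel unpack buffer (PBO). The names textureId, frameData and frameSize are just placeholders for your own texture and a frame you've already loaded into CPU memory, and it assumes the DXT1 path discussed above:

[CODE]
#include <GL/glew.h>   // or your preferred GL loader
#include <cstring>

// Sketch: stream one pre-compressed DXT1 frame into an already-allocated
// 1024x1024 texture via a pixel unpack buffer (PBO).
void streamFrame(GLuint textureId, const void* frameData, GLsizeiptr frameSize)
{
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, frameSize, nullptr, GL_STREAM_DRAW);

    // Copy the frame into the PBO; the driver can then schedule the actual
    // transfer to VRAM without stalling the CPU on the texture upload call.
    void* dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, frameSize, GL_MAP_WRITE_BIT);
    std::memcpy(dst, frameData, frameSize);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // With a PBO bound, the last argument is an offset into the buffer, not a pointer.
    glBindTexture(GL_TEXTURE_2D, textureId);
    glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                              GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                              (GLsizei)frameSize, (const void*)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}
[/CODE]

In a real playback loop you'd keep a small pool of PBOs (or orphan the buffer) rather than create and delete one per frame, so uploads and rendering can overlap.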

[QUOTE=Catelite;1288419]Can somebody tell me the usual number of textures, and their sizes, stored in VRAM in typical games?[/QUOTE]

It’s totally a function of what your application needs, and what types of textures you use. Single high-res textures or texture arrays may take hundreds of MB if needed; maybe even 1GB+.

[QUOTE=Catelite;1288419]I read that OpenGL can compress textures, but I'm unsure whether that is done in VRAM itself; is it?[/QUOTE]

Ok, first: Yes, OpenGL supports compressed texture formats. These can save a lot of space.

And yes, OpenGL will compress texture data to those compressed texture formats. However, you typically wouldn’t use this latter feature, for several reasons (see below). What you’d do instead is compress your textures to a GPU compressed texture format off-line and then upload them to the GPU in that already-compressed format.

So why wouldn’t you let OpenGL compress the texture data for you? First, it’s sloooow. It’s a great way to add long freezes to your app when uploading texel data to the GPU. Second, it results in low-quality compressed textures. Why? Because (despite the slowness) OpenGL’s on-the-fly texture compression is optimized for speed, not quality. So really you just want to precompress before runtime, and then you get rid of both problems.

And no, the compression is not done in the VRAM itself. The GPU compressed texture data is ultimately stored in the VRAM though (after you upload it to the GPU and render with it).
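To make that concrete, here's a minimal C++ sketch of uploading data that was already compressed to DXT1 offline. dxt1Data is a placeholder for the bytes your offline tool produced; mipmaps are omitted for brevity:

[CODE]
#include <GL/glew.h>   // or your preferred GL loader

// Sketch: create a texture from data that was compressed to DXT1 offline.
// For a 1024x1024 DXT1 image, imageSize works out to (1024/4)*(1024/4)*8 = 524,288 bytes.
GLuint createDxt1Texture(const void* dxt1Data, GLsizei width, GLsizei height)
{
    GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The data is uploaded as-is; the driver never recompresses it.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           width, height, 0, imageSize, dxt1Data);
    return tex;
}
[/CODE]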

[QUOTE=Catelite;1288419]If so, is decompression required before displaying the texture?[/QUOTE]

No, the GPU can sample directly from the compressed texture on-the-fly without decompressing more than the texels it needs. GPU compressed texture formats are optimized for random-access spot texture decompression on-the-fly.

[QUOTE=Catelite;1288419]And are some compression formats more standard than others, available on any hardware?[/QUOTE]

The older formats are more ubiquitous. So a lowest-common-denominator would be DXT1.

Define which GPU(s) you’re intending to support, and that will reveal whether you should consider targeting other formats in addition (or instead!).
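As an illustration of that kind of capability check, here's a small C++ sketch (assuming a core-profile context, GL 3.0+) that looks for the S3TC/DXT1 extension at startup:

[CODE]
#include <GL/glew.h>   // or your preferred GL loader
#include <cstring>

// Sketch: check at startup whether the driver exposes S3TC/DXT1 textures.
// Requires a current GL context.
bool hasDxt1Support()
{
    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for (GLint i = 0; i < numExtensions; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, "GL_EXT_texture_compression_s3tc") == 0)
            return true;
    }
    return false;
}
[/CODE]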

This is amazing, Dark Photon, your whole answer contains exactly the information I needed, thank you very much!
So in the end I'm indeed choosing the DXT1 compression format, with two versions of the textures: a 1024×1024 set selected after a check of the graphics card's capabilities, and another 512×512 set with some frame skipping, to fall back to lower resource consumption when required.

Thanks GClements too for your smart idea of video compression; I hadn't thought about it. However, I followed Dark Photon's advice first and it worked, so I stopped the research there. I'm sorry I won't be giving feedback about video compression.

Have a nice day, everybody!