[QUOTE=Catelite;1288419]…cube, whose 6 faces would be used to display 6 different hi-res animated textures. (1024×1024, or better : 2048×2048 if possible…)
With my naive analysis, I estimate the need at 6 textures multiplied by the number of animation frames (let’s say, about 40 for each texture) in the VRAM, with 4 bytes per pixel, so that means 6×1024×1024×40×4 = 960MB. Am I wrong?[/QUOTE]
No, given your assumptions, you’re correct.
However, consider: for a 3D cube map view, you’re probably not using alpha, so 25% of that is wasted. Also, even that can be reduced greatly through the use of GPU compressed texture formats. At the limit, if you use DXT1 you can get an average space consumption of 1/2 byte per texel. So that takes your estimate down to: 6×1024×1024×40×0.5 = 120MB. There are other, newer compressed texture formats as well you could consider.
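To sanity-check that arithmetic, here’s a tiny helper (purely illustrative; the function name is mine) that computes the footprint for both cases:

```c
/* VRAM footprint of an animated cube map: faces x frames x w x h texels,
   times bytes per texel (4.0 for uncompressed RGBA8, 0.5 for DXT1). */
long long anim_texture_bytes(long long faces, long long frames,
                             long long w, long long h,
                             double bytes_per_texel) {
    return (long long)((double)(faces * frames * w * h) * bytes_per_texel);
}
```

Calling it with (6, 40, 1024, 1024, 4.0) gives 1,006,632,960 bytes (960MB); with 0.5 bytes/texel it drops to 125,829,120 bytes (120MB).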
But using GPU compressed texture formats probably only makes sense if your cube maps are pregenerated. Are they? I’m assuming so, because otherwise why would you be concerned about the total space consumption on the GPU of a series of frames?
Also, keep in mind that depending on the required bandwidth, you can stream your data to the GPU dynamically from CPU memory and potentially from disk, avoiding the need to house all of the texture data on the GPU at once (if that’s otherwise impossible).
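For a rough feel of the required bandwidth, here’s a sketch of the math (the 30 Hz playback rate is an assumption; substitute yours):

```c
/* Upload bandwidth needed to stream all cube faces every video frame,
   assuming DXT1 (0.5 bytes/texel) and a given playback rate in Hz. */
long long stream_bytes_per_sec(long long faces, long long w, long long h,
                               long long hz) {
    long long frame_bytes = faces * w * h / 2;  /* DXT1 = 0.5 bytes/texel */
    return frame_bytes * hz;
}
```

For 6 faces of 1024×1024 at 30 Hz that’s 3MB per frame, or 90MB/s — easily within PCIe bandwidth, though disk streaming may need more care.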
[QUOTE=Catelite]Can somebody tell me the usual number of textures, and their sizes, stored in VRAM in standard games?[/QUOTE]
It’s totally a function of what your application needs, and what types of textures you use. Single high-res textures or texture arrays may take hundreds of MB if needed; maybe even 1GB+.
[QUOTE=Catelite]I read that OpenGL can compress textures, but I’m unsure if it is done in the VRAM itself. Is it?[/QUOTE]
Ok, first: Yes, OpenGL supports compressed texture formats. These can save a lot of space.
And yes, OpenGL will compress texture data to those compressed texture formats. However, you typically wouldn’t use this latter feature, for several reasons (see below). What you’d do instead is compress your textures to a GPU compressed texture format off-line and then upload them to the GPU in that already-compressed format.
So why wouldn’t you let OpenGL compress the texture data for you? First, it’s sloooow. It’s a great way to add long freezes to your app when uploading texel data to the GPU. Second, it results in low-quality compressed textures. Why? Because (even despite the slowness) OpenGL’s compressing-texture-data-on-the-fly is optimized for speed, not for quality. So really you just want to precompress before runtime and then you get rid of both problems.
And no, the compression is not done in the VRAM itself. The GPU compressed texture data is ultimately stored in the VRAM though (after you upload it to the GPU and render with it).
[QUOTE=Catelite]If so, is decompression required before displaying the texture?[/QUOTE]
No, the GPU can sample directly from the compressed texture on-the-fly without decompressing more than the texels it needs. GPU compressed texture formats are optimized for random-access spot texture decompression on-the-fly.
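To illustrate why random access is cheap, here’s a minimal decoder for a single texel of one DXT1 block — each 4×4 block is a self-contained 8 bytes, so any texel can be decoded in isolation (opaque 4-color mode only; a full decoder also handles the 3-color mode used when color0 <= color1):

```c
#include <stdint.h>

/* Expand a packed RGB565 color to 8-bit-per-channel RGB. */
static void expand565(uint16_t c, uint8_t rgb[3]) {
    uint8_t r = (c >> 11) & 31, g = (c >> 5) & 63, b = c & 31;
    rgb[0] = (r << 3) | (r >> 2);
    rgb[1] = (g << 2) | (g >> 4);
    rgb[2] = (b << 3) | (b >> 2);
}

/* Decode texel (x,y), each in 0..3, from one 8-byte DXT1 block. */
void dxt1_texel(const uint8_t block[8], int x, int y, uint8_t out[3]) {
    uint16_t c0 = block[0] | (block[1] << 8);   /* endpoint color 0 */
    uint16_t c1 = block[2] | (block[3] << 8);   /* endpoint color 1 */
    uint8_t p0[3], p1[3];
    expand565(c0, p0);
    expand565(c1, p1);
    /* One byte of index bits per row; 2 bits per texel, LSB first. */
    int idx = (block[4 + y] >> (2 * x)) & 3;
    for (int i = 0; i < 3; i++) {
        switch (idx) {
        case 0: out[i] = p0[i]; break;
        case 1: out[i] = p1[i]; break;
        case 2: out[i] = (2 * p0[i] + p1[i]) / 3; break;  /* 2/3 c0 + 1/3 c1 */
        case 3: out[i] = (p0[i] + 2 * p1[i]) / 3; break;  /* 1/3 c0 + 2/3 c1 */
        }
    }
}
```

The hardware does essentially this (plus filtering) per sample, which is why no up-front decompression pass is needed.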
[QUOTE=Catelite]And are there some compression formats, more standard than others, that are available on any hardware?[/QUOTE]
The older formats are more ubiquitous, so the lowest-common-denominator choice would be DXT1.
Define which GPU(s) you’re intending to support, and that will reveal whether you should consider targeting other formats in addition (or instead!).