Understanding Video Memory Usage???

I’ve got a question about memory usage on video cards. I keep seeing reviews of the latest and greatest consumer video cards (usually GeForce2 boards) where the 64MB version is compared against the 32MB version.

Why do the 64MB versions score higher frame rates at higher resolutions? Since the cards score basically the same at low resolutions, it seems the 32MB version isn’t running out of memory there. And at higher resolutions the textures themselves don’t take up any more room (despite what so many reviews have incorrectly stated). The only explanation I can come up with is that the larger frame buffer requirement at high resolutions pushes the 32MB card over its limit. But I just don’t believe that nearly every game on the market is so close to the 32MB limit that an extra meg or so of frame buffer is enough to slow the card down at higher resolutions.

I understand this probably isn’t the proper place to post this question, but I couldn’t think of anywhere else with a bunch of graphics people who might know the answer. I also hope other people here will benefit from whatever answers come up.

I just started thinking about this the other day and it’s been driving me crazy trying to come up with an answer, so any ideas on the matter would be appreciated.

Originally posted by ribblem:
But I just don’t believe that nearly every game on the market is so close to the 32MB limit that an extra meg or so of frame buffer is enough to slow the card down at higher resolutions.

If you sit down and work it out, it’s a lot more than “an extra meg”.

Take the highest of the current “standard” resolutions, 1600x1200. That’s 1,920,000 pixels; call it 2 million for convenience. Now assume 32-bit color in the framebuffer, so 4 bytes per pixel. That’s 8MB. Remember we’re almost certainly double buffering, so that’s another 8MB for the back buffer. Depth and stencil buffers are usually interleaved, with 24-bit Z and 8-bit stencil, so 32 bits = 4 bytes there as well. Another 8MB. Adding it all up, we’ve already used 24MB before we even BEGIN storing textures.
Compare that to about 6MB for 800x600, and I can very easily believe that the extra memory eliminates enough texture swapping to push the framerate up.
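
(For anyone who wants to play with the numbers, here’s that arithmetic as a quick C sketch. The buffer_mb helper is just illustrative; it assumes the same things as above - 4 bytes per pixel, double buffering, interleaved 24/8 Z/stencil - and counts decimal megabytes, so 1600x1200 comes out at 23MB rather than the rounded 24MB. Textures aren’t included.)

#include <stdio.h>

/* Rough buffer footprint: front and back color buffers plus an
   interleaved 24-bit Z / 8-bit stencil buffer, all at 4 bytes per
   pixel.  Texture memory is NOT included. */
static double buffer_mb(int width, int height)
{
    const double bytes_per_pixel = 4.0;   /* 32-bit color, or 24/8 Z/stencil */
    const int    num_buffers     = 3;     /* front, back, Z/stencil */
    return (double)width * height * bytes_per_pixel * num_buffers / 1e6;
}

int main(void)
{
    printf("800x600:   %5.1f MB\n", buffer_mb(800, 600));      /* ~5.8 MB  */
    printf("1024x768:  %5.1f MB\n", buffer_mb(1024, 768));     /* ~9.4 MB  */
    printf("1600x1200: %5.1f MB\n", buffer_mb(1600, 1200));    /* ~23.0 MB */
    return 0;
}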

Yeah, in the extreme case that’s true, but I was looking more at 800x600 vs. 1024x768, where the difference is just over a meg per buffer, so I guess about 2 megs (at 32-bit depth).

Plus, when you compare 16-bit depth to 32-bit depth in these reviews, the numbers for memory usage don’t seem to add up.

Maybe I’m all wrong and it is just the frame buffers, but I just don’t think so.

I make it about 3.6 megs once you include the back buffer and Z/stencil, but even so, was there really a big difference between 800x600 and 1024x768?
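
(Quick sanity check on that 3.6 figure, with the same assumptions as the sketch above - 4 bytes per pixel, three buffers, decimal megabytes:)

#include <stdio.h>

int main(void)
{
    /* Extra bytes per buffer going from 800x600 to 1024x768 at
       4 bytes per pixel, then times three buffers (front, back,
       Z/stencil).  Decimal megabytes. */
    double extra_per_buffer = (1024.0 * 768 - 800.0 * 600) * 4 / 1e6;
    printf("extra per buffer: %.2f MB\n", extra_per_buffer);      /* ~1.23 MB */
    printf("extra in total:   %.2f MB\n", extra_per_buffer * 3);  /* ~3.68 MB */
    return 0;
}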

A small difference wouldn’t surprise me. With high-res textures, a lot of games are doing SOME texture swapping each frame even on a 32MB card, and that would disappear on a 64MB card. But I wouldn’t expect a huge leap.

(Unless they were benchmarking with FSAA enabled - that multiplies your buffer memory footprint by a large factor, and would certainly put the squeeze on texture memory.)
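
(If the FSAA in question was the supersampling kind these cards do, the hit looks roughly like the sketch below. The fsaa_buffer_mb helper is just a back-of-the-envelope guess at how the driver might size the buffers, not a statement about any particular card.)

#include <stdio.h>

/* Very rough sketch of supersampled FSAA: the back and Z/stencil
   buffers are rendered at N times the pixel count and then
   downsampled into the displayed front buffer.  Exact behavior is
   driver-specific; this only illustrates why the footprint balloons. */
static double fsaa_buffer_mb(int width, int height, int fsaa_factor)
{
    double pixels = (double)width * height;
    double front  = pixels * 4;                  /* displayed color buffer      */
    double back   = pixels * 4 * fsaa_factor;    /* supersampled color buffer   */
    double zsten  = pixels * 4 * fsaa_factor;    /* supersampled 24/8 Z/stencil */
    return (front + back + zsten) / 1e6;
}

int main(void)
{
    printf("1024x768, no FSAA: %.1f MB\n", fsaa_buffer_mb(1024, 768, 1));  /* ~9.4 MB  */
    printf("1024x768, 4x FSAA: %.1f MB\n", fsaa_buffer_mb(1024, 768, 4));  /* ~28.3 MB */
    return 0;
}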