how much video memory is used by framebuffer?

2 part question:

how much video memory is used by the screen without any OGL or DX app running? obviously, this is at least width x height x [24|32] bits, but is that all? it’s not like the normal windows/linux/OSX screen needs depth, alpha or stencil information.

its corollary: how much is used by an OGL/DX framebuffer? width x height x (color+depth+stencil+aux)?
does double buffering (front + back) require multiplying the color part by 2? same for stereo?

this seems like an easy question. does anyone out there with hardware expertise have an answer? ANYONE?

thanks!
:slight_smile:

I had this problem once… it is crazy…
the framebuffer is not available under the part of the window that is obscured by another window, and it can grow and shrink with the window… it seems like the operating system is almighty

I have read some quadro fx (or something like that) documentation that stated those cards have hardware acceleration for 8 clipping regions (like menus or floating toolbars), while a normal card only has acceleration for 2 clipping regions.

My guess is that you cannot even tell whether the framebuffer is contiguous… so your question may remain unanswered for a long time…

i am not concerned about memory contiguity or even rendering, but rather just a simple statement of how much video memory is used (in a gross sense) by a regular desktop screen and an OpenGL/DX window.

for example, does a 1600x1200 24 bit display only use 5 1/2 MB video memory on any card? or is this memory separate and only used when HW rendering is used (via OGL/DX window)?

likewise, how much video memory does a 500x500 RGBA+stencil+depth window use? is this deterministic, or do opaque driver choices prevent knowing the details?

essentially, i want to know, within 5-10%, how much video memory out of the total on the card (e.g. 128MB) is available in a very general way for any program (after allowing for screen/framebuffer).

for example, on a 128MB card, you could never use all 128MB for your program, but maybe 120MB is a reasonably close approximation of how much video memory is arbitrarily available?
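for reference, the naive arithmetic i’m assuming (just width x height x bytes per buffer, no padding or driver overhead - no idea if that matches what the hardware actually does):

1600 x 1200 x 3 bytes (24-bit color) = 5,760,000 bytes, about 5.5 MB
500 x 500 x 4 bytes (RGBA color) = 1,000,000 bytes, about 1 MB
500 x 500 x 4 bytes (packed 24/8 depth/stencil) = 1,000,000 bytes, about 1 MB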

There is no way to be sure. It is all completely implementation dependent.

However, as a general rule, you know the bare minimum that is taken up by the frame buffer. A 1024x768x32-bit framebuffer takes 3MB * 3 (front, back, and z-buffer), or 9MB of room. So the framebuffer will take up no less than 9MB of room, possibly more. That’s the best you can do.
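If you want that lower bound in code, here’s a rough sketch (my own numbers, assuming tightly packed 32-bit color buffers and one 32-bit depth/stencil buffer with no padding - real drivers may well use more):

#include <stdio.h>

/* rough lower bound: front + back color buffers plus one depth/stencil buffer,
   all assumed to be 4 bytes per pixel with no padding */
unsigned long MinFramebufferBytes(unsigned long width, unsigned long height)
{
    unsigned long colorBytes        = 4;   /* 32-bit color     */
    unsigned long colorBufferCount  = 2;   /* front + back     */
    unsigned long depthStencilBytes = 4;   /* e.g. packed 24/8 */
    return width * height * (colorBufferCount * colorBytes + depthStencilBytes);
}

int main(void)
{
    /* 1024x768 prints roughly 9.0 MB, matching the estimate above */
    printf("%.1f MB\n", MinFramebufferBytes(1024, 768) / (1024.0 * 1024.0));
    return 0;
}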

thanks. i suspected as much, and am comfortable making a reasonable approximation.

BUT can anyone tell me if the regular desktop screen takes video memory on the card (when no OGL/DX app is running)? i assume it does, but you never know… so right out of the box, a good chunk of video memory is gone depending on the display mode…

He just said it does. The pixels you look at on the screen ARE in memory on the video card. That is the bare minimum, and he gave you some numbers, but more than this is used: plain 2D Windows operations in the GDI take additional memory on the card for fast 2D work, and icons, fonts and other buffers can live on the graphics card too (AFAIK).

I suspect on some cards with a windows desktop the backbuffer & z stencil etc are not always reserved, but I could be wrong.

The absolute bare minimum with no 3D is going to be what you’re looking at: resolution times color depth, which is probably 4 MB or more, but that will almost certainly be an underestimate. The visible buffer also gets used for 3D rendering on most windowed systems (with a copy on swap), and the whole thing probably gets completely reused in, say, full screen mode.

So the answer is yes, and probably more than you might assume counting pixels, and it’s still very driver dependent and there’s probably no way of really knowing for sure.

9 MB? I really think there is no way to know for sure. What about antialiasing? Or all these buffer compression techniques that IHVs love so much?

Y.

AA takes more. I’m pretty sure there are implementations that don’t preallocate the screen’s worth of backbuffer, z, stencil (and AA), etc. Yes, there’s no way of knowing for sure.

9 MB? I really think there is no way to know for sure. What about antialiasing? Or all these buffer compression techniques that IHVs love so much?

Like I said, it is a minimum number, not a maximum.

Buffer compression is for performance and bandwidth, not memory footprint.

Here’s a formula I use, which seems to work well on mainstream hardware:

EffectiveWidth = (ScreenWidth+31)&-32
EffectiveHeight = (ScreenHeight+31)&-32
BppRgb = (RgbBits > 16 ? 4 : 2)
BppZS = ((Stencil || (DepthBits > 16)) ? 4 : 2)
NumFB = (DoubleBuffered ? 2 : 1)

TotalMemory = EffectiveWidth * EffectiveHeight * ((BppRgb * NumFB) + BppZS)

This takes into account the kinds of rounding, padding, and alignment that are common on current hardware. Also, the ScreenWidth/ScreenHeight values may refer to the physical screen even if your window is smaller, depending on the backbuffer implementation!

2 1600x1200 screens on one desktop, with RGBA 32-bit and 24/8 depth/stencil, would according to this formula take > 40 MB of framebuffer space. Thus, those multi-monitor 32 MB cards aren’t really up to the task they claim to be :slight_smile:
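And here is that formula as a small C function, for convenience (untested sketch - the 32-pixel alignment and the bytes-per-pixel choices are just the heuristic above, not anything the driver guarantees):

#include <stdio.h>

/* framebuffer size estimate per the heuristic above: dimensions rounded up
   to a multiple of 32, color and depth/stencil at 2 or 4 bytes per pixel */
unsigned long EstimateFramebufferBytes(unsigned long screenWidth,
                                       unsigned long screenHeight,
                                       int rgbBits, int depthBits,
                                       int stencilBits, int doubleBuffered)
{
    unsigned long w      = (screenWidth  + 31) & ~31UL;
    unsigned long h      = (screenHeight + 31) & ~31UL;
    unsigned long bppRgb = (rgbBits > 16) ? 4 : 2;
    unsigned long bppZS  = (stencilBits || depthBits > 16) ? 4 : 2;
    unsigned long numFB  = doubleBuffered ? 2 : 1;

    return w * h * (bppRgb * numFB + bppZS);
}

int main(void)
{
    /* two 1600x1200 screens, 32-bit RGBA, 24/8 depth/stencil, double buffered:
       prints roughly 44.5 MB, i.e. the "> 40 MB" mentioned above */
    double mb = 2.0 * EstimateFramebufferBytes(1600, 1200, 32, 24, 8, 1)
                    / (1024.0 * 1024.0);
    printf("%.1f MB\n", mb);
    return 0;
}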

Intel (and a few others? Kyro) make use of tile rendering, which means there is no real z & stencil.

So the minimum you try to calculate would be wrong on those.

There are also implementations that may decide to do triple buffering.

Video memory usage: one of the greatest mysteries in life.

thanks for all the really helpful feedback.

one last question:
does creating a rendering context allocate any more video memory, or does the context just use the portion of screen memory wherever the window happens to be? that is, the screen is already using 32 bits color + 24 bits depth + 8 bits stencil, for example, and the new context just uses a portion of that (this is what i believe); OR does the new context allocate additional resources that are somehow mapped back into the screen (i doubt it)?

therefore, does the video mode you’re in totally dictate the type(s) of framebuffers you can create (i think it does)?

therefore, if an app needs 8-bit stencil, for example, and the current video mode does not support it, there is no way to programmatically “sacrifice” depth bits, for example, in exchange for more stencil bits, without changing the video mode, correct?

thanks!!!11

Originally posted by codemonkey76:
[…]
one last question:

one? i count 3 of them! (never mind, i just can’t resist)

the answers are:

  1. it depends on the implementation of the driver.
  2. yes, that’s true (in most cases; usually you can’t get a 32-bit pixelformat on a 16-bit desktop, but it is possible to get a 16-bit pixelformat on a 32-bit desktop. it is possible - this does not mean you can depend on it…). you can enumerate what the driver actually offers under the current mode - see the sketch after this list.
  3. yes. and if you want to change the pixelformat (or resolution) on the fly, you need to destroy the opengl window and recreate it in order to get a new DeviceContext (and with it a new pixelformat).
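here is roughly what that enumeration looks like on win32 (untested sketch, plain wgl - it just walks the pixelformats the driver reports for the current desktop mode and prints the window-capable ones):

#include <windows.h>
#include <stdio.h>

/* print every OpenGL-capable pixelformat the driver exposes for the
   current display mode, with its color/depth/stencil bit counts */
void ListPixelFormats(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int i;
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);

    for (i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if ((pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
            (pfd.dwFlags & PFD_DRAW_TO_WINDOW))
        {
            printf("format %d: color %d, depth %d, stencil %d\n",
                   i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits);
        }
    }
}

formats that have PFD_GENERIC_FORMAT set (and not PFD_GENERIC_ACCELERATED) are the software fallback, so you probably want to skip those as well.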

Try this: http://www.area3d.net/file.php?filename=nitrogl/VideoMemory.zip

The “total” memory is how much memory is left after Windows has taken whatever it needs for the desktop.

In my case it says I have a total of 122.305MB out of 128MB, so 5.695MB is used (1280x1024x32bit - the visible buffer alone is exactly 5MB, so only about 0.7MB goes to everything else the desktop keeps on the card).

Originally posted by V-man:
Intel (and a few others? Kyro) make use of tile rendering, which means there is no real z & stencil.

In reality, even with tile rendering, real z & stencil buffers are usually still required in case the app decides to read back color or depth data.

So the minimum you try to calculate would be wrong on those.

In scene capture devices (i.e. tile based), more memory (i.e. textures, vertex arrays) is typically committed for the duration of the scene (to optimize for on-chip cache locality). Thus, if you use 192MB of textures/vbos in a frame with only 128MB of gfx memory, the scene must be flushed to HW and performance suffers. Something to keep in mind when optimizing for tile-based renderers. For immediate-mode renderers, it is similar to the AGP bandwidth usage budget in a scene.

There are also implementations that may decide to do triple buffering.

Yep

Video memory usage: one of the greatest mysteries in life.

Agreed. Don’t get too close to the edge. You just might fall over!

Originally posted by codemonkey76:
how much video memory is used by the screen without any OGL or DX app running?

The only practical way to tell is to test. Assuming you want to know this info for planning future allocations in an OGL app, I’d try allocating test data until I hit the wall and then free the test memory.

Unfortunately, most methods of grabbing video memory don’t hit a wall, but spill over into other pools instead. So it’s not all that simple. Is glAreTexturesResident() good enough? Never tried it, but that’s the way I’d start.
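Something along these lines, maybe (completely untested sketch - the 1MB probe textures and the residency test are just my guesses at how to go about it, and as noted the residency query may not mean much on some drivers):

#include <GL/gl.h>

#define PROBE_MAX 256          /* probe at most 256 MB        */
#define PROBE_DIM 512          /* 512x512 RGBA8 = 1 MB each   */

/* crude probe: create 1 MB textures until allocation fails or one of them
   is reported non-resident, then free everything and return the count */
int ProbeVideoMemoryMB(void)
{
    GLuint    tex[PROBE_MAX];
    GLboolean resident = GL_FALSE;
    int       i, count = 0;

    glGenTextures(PROBE_MAX, tex);

    for (i = 0; i < PROBE_MAX; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, PROBE_DIM, PROBE_DIM,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* in a real test you'd draw something with each texture first,
           since drivers usually only commit video memory once it is used */

        if (glGetError() != GL_NO_ERROR ||
            !glAreTexturesResident(1, &tex[i], &resident))
            break;

        ++count;
    }

    glDeleteTextures(PROBE_MAX, tex);
    return count;              /* roughly the MB of texture memory that fit */
}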

Avi