View Full Version : How do you enable a w-buffer?

04-14-2002, 09:11 PM
Do you have to use an extension of some sort? In Direct3D it's really simple: m_Device->SetRenderState( D3DRS_ZENABLE, D3DZB_USEW ). I'm sure it's more complicated with OpenGL, but how do you do it? The Red Book does not even allude to the existence of w-buffers.

04-14-2002, 11:02 PM
AFAIK OpenGL really only deals with a "depth buffer". It does not specifically say whether that's Z or W (only that "very far away" = 1.0 and "very close" = 0.0). I have found no way of controlling the depth buffer format under OpenGL. Perhaps some drivers allow changing this on a global basis (in the display settings under Windows, for instance).
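
For what it's worth, here's a minimal sketch of how a depth buffer is requested under Win32 (hdc is assumed to be your window's device context). Note that you can only ask for a size in bits, never for Z versus W:

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;  /* 16, 24 or 32 bits, but no format selection */
    int format = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, format, &pfd);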

04-15-2002, 04:11 PM
Well, there's no option for me (with a GeForce2 Go) to select whether OpenGL uses a z-buffer or w-buffer. But I'm sure OpenGL must support this somehow, right?

04-15-2002, 04:45 PM
There may be an OpenGL extension for w-buffer support, but none that I'm aware of.

You could implement your own with the depth replace texture shaders on GeForce3 and GeForce4 Ti.

Thanks -

04-15-2002, 05:55 PM
I have a GeForce2 which doesn't support pixel shaders, but anyway I'd want to support w-buffers on any card that allows it, not just cards that have pixel shading.

04-15-2002, 06:19 PM
>>I have a GeForce2 which doesn't support pixel shaders, but anyway I'd want to support w-buffers on any card that allows it, not just cards that have pixel shading.<<

I don't believe all cards support it (even with D3D), so it looks like you're stuck with the z-buffer, which IMHO is a lot better than a w-buffer. If you want more precision at the far range of the z-buffer, there are a few things you can do; search this group for examples.

04-15-2002, 06:43 PM
I wasn't planning on requiring w-buffers; my point was that I should allow them on any card that supports w-buffers, even if it doesn't support pixel shading.

Anyway, I read a tutorial online and discovered that w-buffers are less accurate at close range, which is definitely Not Good. So I'll just stick with a z-buffer, I guess, and see how good/bad the w-buffer is in D3D.

04-15-2002, 11:17 PM
It is my understanding that the precision of the W buffer is equal regardless of distance, while the precision of the Z buffer is better at close distances and worse at far distances. In general, you want the Z buffer behaviour (closer is more important).
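
A small sketch to make that concrete (the clip plane values are just illustrative):

    #include <stdio.h>

    /* What a z-buffer stores (normalized to 0..1) for a given eye-space
       depth, versus what a w-buffer stores. n and f are the near and
       far clip planes. */
    static const float n = 1.0f, f = 1000.0f;

    static float z_stored(float eyeZ) { return (f / (f - n)) * (1.0f - n / eyeZ); }
    static float w_stored(float eyeZ) { return eyeZ / f; }

    int main(void)
    {
        const float depths[] = { 1.0f, 2.0f, 500.0f, 1000.0f };
        for (int i = 0; i < 4; ++i)
            printf("eyeZ = %7.1f   z = %.6f   w = %.6f\n",
                   depths[i], z_stored(depths[i]), w_stored(depths[i]));
        /* z burns half its range (0.0 -> 0.5005) on the first unit past
           the near plane; w climbs linearly, spreading precision evenly. */
        return 0;
    }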

Cass, when I was writing a Glide -> OpenGL wrapper, I was searching high and low for an OpenGL extension that allowed selection of depth buffer format (in Glide you can select W or Z), but I didn't find any.


04-16-2002, 10:26 AM
You're right, Marcus; that's what I meant. The w-buffer is less accurate at close distances than the z-buffer. Its inaccuracy is even throughout, since it's just the floating-point precision that limits it, but close up it might be very noticeable. For terrain it won't be noticeable, but for other objects it probably will be.

04-16-2002, 11:21 AM

Careful. W varies linearly in eye space, so if you stored it in fixed point, the accuracy is independent of the depths being resolved. If you store W in floating point, you *do* change the way precision is distributed. Floating point naturally packs more precision toward zero. That's why some people advocate having a floating point z buffer with a glDepthRange(1,0). The idea would be to try to balance the uneven distribution of the z buffer with the uneven precision distribution of floating point.
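
In GL terms the trick would look something like this, assuming a driver that actually gave you a floating-point depth buffer:

    glDepthRange(1.0, 0.0);   /* map near -> 1.0 and far -> 0.0       */
    glClearDepth(0.0);        /* "far" is now zero                    */
    glDepthFunc(GL_GREATER);  /* nearer fragments have *larger* depth */
    /* Distant values now land near 0.0, exactly where a float packs
       the most precision, counteracting the z-buffer's near bias. */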

Thanks -

04-16-2002, 12:28 PM
The relationship would seem arbitrary if you did this, depending on where the near and far clip planes landed... but I suppose it's already arbitrary.

How about a floating-point W buffer? You do still want and need more precision towards the viewer. An FP W buffer would give a more consistent and pleasing depth-precision distribution over a broader (and more typical) range of near and far clip values, without the nasty side effects you get when the near clip is too close to the eye, which is the real reason most people have precision problems with the Z buffer.


04-16-2002, 02:30 PM
The w-buffer is *always* floating-point, which is why I was thinking of using it. W = 1/Z, so if W were an integer, it would always be zero.

04-16-2002, 08:18 PM

There's nothing that says that w-buffering must be done in floating point. That may be the way it's done in Direct3D, but that doesn't mean it can't be done in fixed point.


04-16-2002, 09:58 PM
Actually you're right, I was thinking of taking the inverse of an integer... but just as the floating-point Z coordinates are scaled to (0..65535) or (0..16777215), so too could the W coordinates be converted to fixed point.
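
Something like this, for example, for a 24-bit buffer (a sketch; the input is assumed to be normalized already):

    /* Convert a w (or z) value already normalized to 0..1 into a
       24-bit fixed-point integer. */
    unsigned int to_fixed24(float d)
    {
        if (d < 0.0f) d = 0.0f;
        if (d > 1.0f) d = 1.0f;
        return (unsigned int)(d * 16777215.0f + 0.5f);  /* 2^24 - 1 */
    }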

04-17-2002, 12:48 AM
There's a paper about the various z-buffer and w-buffer precision issues (and the glDepthRange hack Cass talks about); it's called "Optimal Depth Buffer for Low-Cost Graphics Hardware" (look for it on Google). It might give some more insight...


04-17-2002, 08:24 PM
The issue is how W is stored in the framebuffer and used for comparison, not simply what comes off the transformed vertex.

As Cass says, the stored representation is everything, and a float representation isn't a given; that takes floating-point fragment interpolators. It is wrong to describe a W buffer as linear if it is stored as a float.

My apologies to Cass; I hadn't read your post fully when I wrote what I did. An fp W intuitively seems to have some inherently attractive properties, although perhaps a scaled fp 'linear' value, running from a zero near value to any programmable far value before you store to limited precision, would permit almost complete control of precision throughout the 'linear' range.

It could be a lot of extra work though.


04-18-2002, 10:36 AM
I have to ask: why are there no 32-bit z-buffers? I thought it was because of the video memory such a buffer would take up, but since newer graphics cards offer plenty of video memory, surely it would at least be an option now if that were the only problem.

Do the cards use some sort of optimization for comparison that depends on the size of each entry in the z-buffer? I would have thought a 32-bit comparison would be faster than a 24-bit one, since it's a dword.

04-18-2002, 10:41 AM
On nVidia cards, a 24-bit Z-buffer also gives 8 bits of stencil, so it becomes 32-bit aligned.
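
Roughly like this (the actual in-memory layout is of course up to the hardware):

    /* A 24-bit depth value and an 8-bit stencil value sharing one
       32-bit word. */
    unsigned int pack_d24s8(unsigned int depth24, unsigned char stencil)
    {
        return (depth24 << 8) | stencil;  /* depth in the top 24 bits */
    }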


04-18-2002, 11:23 AM
What about this extension? http://oss.sgi.com/projects/ogl-sample/registry/EXT/wgl_depth_float.txt

According to Delphi3D, no card supports it. :(
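
For reference, if a driver did expose it, the spec implies you'd request it through WGL_ARB_pixel_format, something like this (assuming hdc and the ARB entry points are already set up):

    int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, TRUE,
        WGL_SUPPORT_OPENGL_ARB, TRUE,
        WGL_DEPTH_BITS_ARB,     24,
        WGL_DEPTH_FLOAT_EXT,    TRUE,  /* ask for a float depth buffer */
        0
    };
    int format; UINT count;
    wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);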

04-18-2002, 04:22 PM
CGameProgrammer, are you aware of any video card on the market whose core design _actually_ started shipping less than ~3 years ago?

All we have to do is wait and good things will come to us.

My comments about scaling would not apply to a 32-bit float (but would apply to a 32-bit int and maybe fixed point), since any messing around with the value would be bad for precision unless you get better than 32 bits from the vertex transformation. So you'd probably want to interpolate right off the coordinate and store it. A few extra bits in the evaluation of fragment depth become desirable. You end up deep in fp arithmetic precision issues, which then have a greater effect on fragment z than the representation you choose to store. One principle should probably be to modify the data as little as possible after the transform, but you have to consider the fragment evaluation arithmetic precision too.

One thing seems likely: 32-bit fp depth buffers will have limitations elsewhere, and those limitations will be virtually impossible to know without detailed knowledge of the hardware and more smarts than I have.

04-18-2002, 04:51 PM
Hmm... another thought: you probably do want to scale the float. Because of the range, you aren't going to use all the bits in the exponent, and scaling would buy you extra precision for the fragment evaluation vs. what you store.

I wonder if a simple shift of the exponent bits would do it?


04-18-2002, 07:30 PM
I can't remember the floating-point format, other than that there's a sign bit, exponent bits, and "the rest." But it seems to me that if hardware vendors decided to internally use a 0..1 floating-point buffer, they could use that sign bit for something else, and shorten the exponent part of the variable, using the extra room for the data, the number itself. Right?

04-18-2002, 08:12 PM
Well, yes. But getting rid of the exponent and the sign bit, so that numbers in the range 0..1 are the only numbers represented, leaves you with just the mantissa. So you have a 32-bit value with the minimum being 0 and the maximum being 1. That's fixed point, and is pretty much what we have right now, if not exactly.
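
To make that concrete, a sketch that pulls an IEEE 754 single apart into its three fields:

    #include <stdio.h>
    #include <string.h>

    /* Split an IEEE 754 single into its three fields. Drop the sign and
       exponent and all that remains is the mantissa: a uniformly
       spaced, i.e. fixed-point, value. */
    void split_float(float x)
    {
        unsigned int bits;
        memcpy(&bits, &x, sizeof(bits));               /* safe type-pun */
        unsigned int sign     = bits >> 31;            /*  1 bit  */
        unsigned int exponent = (bits >> 23) & 0xFFu;  /*  8 bits */
        unsigned int mantissa = bits & 0x7FFFFFu;      /* 23 bits */
        printf("s=%u e=%u m=0x%06X\n", sign, exponent, mantissa);
    }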


04-18-2002, 09:53 PM
Yep, this is getting nowhere fast :-)

The whole point of a floating point, as you point out, is the exponent.

I can see that you might want to lose a few bits of exponent, but goodness, not all of it! This is effectively what I suggested with the exponent bit shift. The LSBs would be zero, but you'd get better fragment evaluation accuracy due to the larger exponent you start with.

You probably already have IEEE 32-bit floating point from the transformed vertices, so all those old inventive schemes don't buy you anything. The only reason weird schemes work now is that depth buffers have less precision than the transformed vertex values, whether it's eye Z or W you're talking about. When you go from vertex to depth value you start off with more precision; when you store as floating point, there is nothing to be gained through manipulation other than trying to preserve as much of the available precision as possible during fragment evaluation.

04-19-2002, 01:24 PM
Well, I never suggested getting rid of the exponent, just shortening it.

Also, dorbie, I was thinking of small fp variables (16 or 24 bits), not 32 bits. I doubt any modifications to the floating-point format will be needed for a 32-bit fp depth buffer, though at that bit depth, fixed point is probably fine.


04-19-2002, 03:24 PM
Point taken, I wasn't trying to be critical of you.

BTW, on the same train of thought perhaps you want to simply move the exponent/mantissa boundary in the fp representation so that the zero MSB of the exponent becomes the zero LSB of the mantissa for fragment evaluation.


04-23-2002, 11:17 PM
Originally posted by dorbie:
BTW, on the same train of thought perhaps you want to simply move the exponent/mantissa boundary in the fp representation so that the zero MSB of the exponent becomes the zero LSB of the mantissa for fragment evaluation.

Well, this is a no-cost hardware operation, and is really only the same as removing the zero MSB (I suppose you're referring to the zero sign bit?), so e.g. you would only use 31 bits out of the 32 bits of a 32-bit fp representation, since you know the MSB is always zero.

I'm not sure if a 16-bit floating point format would be useful, but a 24-bit floating point format should be able to perform very well:

5 bits exponent + 19 (+1) bits mantissa

Remember, the MSB of the mantissa is always 1 and need not be stored, so you get 20 effective bits of mantissa, and a range equal to a 32-bit fixed-point format (the exponent can shift the MSB of the mantissa 31 bits). Numbers in the range 1.0 down to 2^-31 (about 4.7 * 10^-10) could be represented with this format.
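
A sketch of that format (the bias of 31 is my assumption, chosen so that e = 0..31 shifts the implicit leading 1 by 0 to 31 places):

    #include <math.h>

    /* Decode a 24-bit float: 5-bit exponent, 19 stored mantissa bits,
       implicit leading 1. */
    float decode24(unsigned int bits)
    {
        unsigned int e = (bits >> 19) & 0x1Fu;  /* 5-bit exponent  */
        unsigned int m = bits & 0x7FFFFu;       /* 19-bit mantissa */
        return ldexpf(1.0f + m / 524288.0f, -(int)e);  /* 2^19 = 524288 */
    }
    /* e = 0,  m = 0 gives 1.0 (the top of the depth range);
       e = 31, m = 0 gives 2^-31, matching a 32-bit fixed-point range. */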