View Full Version : z-buffer resolution in OpenGL, how many bits?

09-30-2009, 07:18 PM
Is there a fixed size that we all end up using? Is it adjustable?

09-30-2009, 07:42 PM
OpenGL does not define the depth of the z-buffer. That is usually handled by the windowing system or whatever else you use to create the context. I would not expect that you could change the size of the buffer after the context has been created.

09-30-2009, 07:43 PM
Isn't there a certain number of bits? Does it all depend on the zFar and zNear values?

09-30-2009, 07:48 PM
There is a certain number of bits, and both that bit count and the zNear and zFar values affect the resolution of the buffer. When the vertices are projected, they are projected into a cube two units on a side around the origin: vertices near zNear end up near z = -1 and vertices near zFar end up near z = +1. The larger the ratio between zFar and zNear, the more the projection compresses distinct depths toward the far end of that range. The projected values are then stored in a buffer with some fixed number of bits, which also limits the precision of the depth comparisons.

09-30-2009, 10:06 PM
Ever since GPUs have existed, the supported bit depths have been:
16-bit depth
24-bit depth + 8 bits of padding (32 bits total)
24-bit depth + 8-bit stencil (also called D24S8)
32-bit depth (rarely supported)

If you are on Windows, check out function DescribePixelFormat


10-01-2009, 07:21 AM
Isn't there an OpenGL call to query and set the value?

Ilian Dinev
10-01-2009, 07:26 AM
As V-man pointed out, look at DescribePixelFormat.

static const PIXELFORMATDESCRIPTOR pfd = // pfd Tells Windows How We Want Things To Be
{
    sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
    1, // Version Number
    PFD_DRAW_TO_WINDOW | // Format Must Support Window
    PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
    PFD_DOUBLEBUFFER, // Must Support Double Buffering
    PFD_TYPE_RGBA, // Request An RGBA Format
    32, // Select Our Color Depth
    0, 0, 0, 0, 0, 0, // Color Bits Ignored
    0, // No Alpha Buffer
    0, // Shift Bit Ignored
    0, // No Accumulation Buffer
    0, 0, 0, 0, // Accumulation Bits Ignored
    24, // ********* 24-Bit Z-Buffer (Depth Buffer) *********** NOTE
    0, // No Stencil Buffer
    0, // No Auxiliary Buffer
    PFD_MAIN_PLANE, // Main Drawing Layer
    0, // Reserved
    0, 0, 0 // Layer Masks Ignored
};

GLuint PixelFormat;




Ilian Dinev
10-01-2009, 07:31 AM
P.S. OpenGL always depends on an external, platform-dependent windowing interface to initialize the context. On Win32 it's WGL, on Linux it's GLX, and on mobile it's EGL.

10-01-2009, 11:37 AM
Regardless of the windowing system, you can always query the actual number of bits you got:

GLint actualbits;
glGetIntegerv(GL_DEPTH_BITS, &actualbits);

10-16-2009, 02:15 PM
Tonight I tried to explain to a friend of mine how the 3D pipeline works. I hope I succeeded, at least as much as I could during the car ride... :)

But at the end, he asked me: why isn't the Z-buffer 32-bit or even 64-bit? Hmmm... Believe it or not, after five years of teaching computer graphics I didn't know what to say. Only 24-bit Z-buffers are available, even on the very powerful 3D accelerators. :(

Is there any justification for that?

Ilian Dinev
10-16-2009, 03:36 PM
1) Stencil buffers became immensely popular at a very critical time (thanks to reflections/shadow volumes), and being 8 bits they pack with a 24-bit depth into one 32-bit word.
2) All the existing codebase depends on z being in [0;1].
3) Floats don't get compressed as nicely (or supported at all) by HiZ/Zcull/EarlyZ yet.

10-16-2009, 05:58 PM
I do think 3) is the main killer of any z-buffer deeper than 24 bits. Even a 32-bit integer format would be harder to optimize.

I guess it will come, but rather later than sooner.

10-18-2009, 04:49 AM
Thank you, guys! [thumbs-up]
I admire your zeal in answering so many posts.
I feel like getting tired after only a few months. :)