Z-buffer resolution in OpenGL: how many bits?

Is there a fixed size that we all end up using? Is it adjustable?

OpenGL itself does not define the depth of the z-buffer. That is usually handled by the windowing system or whatever else you use to create the context. I would not expect that you could change the size of the buffer after the context has been created.

Isn’t there a certain number of bits? Does it all depend on the zFar and zNear values?

There is a certain number of bits, and both that bit count and the zNear/zFar values affect the resolution of the buffer. When vertices are projected, they end up in a cube two units on a side centered on the origin: vertices near zNear map to z = -1 and vertices near zFar map to z = 1, and the mapping is non-linear, with most of the depth range spent close to zNear. If zNear and zFar are very far apart, points that are well separated in eye space can end up very close together after projection. Those values are then stored in a buffer with a fixed number of bits, which further limits the precision of the comparisons.
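To make the precision argument concrete, here is a minimal sketch, assuming a conventional glFrustum-style projection and an unsigned normalized (fixed-point) depth buffer; the helper names and the zNear/zFar values are purely illustrative:

#include <stdio.h>

/* Map an eye-space distance (positive, camera looking down -Z) to NDC depth
   in [-1, 1] using the standard glFrustum projection. */
static double ndc_depth(double eyeDist, double zNear, double zFar)
{
    double ze = -eyeDist;                               /* eye-space z is negative       */
    double zc = -(zFar + zNear) / (zFar - zNear) * ze
                - 2.0 * zFar * zNear / (zFar - zNear);  /* clip-space z                  */
    return zc / -ze;                                    /* perspective divide by w = -ze */
}

/* Quantize window-space depth ([0, 1]) to an integer depth-buffer value. */
static unsigned quantize(double ndc, unsigned bits)
{
    double window = 0.5 * ndc + 0.5;                    /* default glDepthRange(0, 1)    */
    return (unsigned)(window * ((1u << bits) - 1u) + 0.5);
}

int main(void)
{
    double zNear = 0.1, zFar = 1000.0;

    /* Two points one unit apart near the far plane: at 16 bits they collapse
       to the same depth value, at 24 bits they stay distinct. */
    printf("16-bit: %u %u\n", quantize(ndc_depth(900.0, zNear, zFar), 16),
                              quantize(ndc_depth(901.0, zNear, zFar), 16));
    printf("24-bit: %u %u\n", quantize(ndc_depth(900.0, zNear, zFar), 24),
                              quantize(ndc_depth(901.0, zNear, zFar), 24));
    return 0;
}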

Ever since GPUs have existed, the supported bit depths have been:
16-bit depth
24-bit depth plus 8 bits of padding (32 bits total)
24-bit depth plus 8-bit stencil (also called D24S8)
32-bit depth (rarely supported)

If you are on Windows, check out the DescribePixelFormat function.
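For example, a minimal sketch that enumerates the pixel formats of a device context and prints the depth and stencil bits each one offers; hDC is assumed to be a valid window DC obtained elsewhere:

#include <windows.h>
#include <stdio.h>

/* List the depth and stencil bit counts of every OpenGL-capable pixel format. */
void ListDepthFormats(HDC hDC)
{
    PIXELFORMATDESCRIPTOR pfd;
    /* The return value is the number of pixel formats available on this DC. */
    int count = DescribePixelFormat(hDC, 1, sizeof(pfd), &pfd);

    for (int i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hDC, i, sizeof(pfd), &pfd);
        if (pfd.dwFlags & PFD_SUPPORT_OPENGL)
            printf("format %d: %d depth bits, %d stencil bits\n",
                   i, pfd.cDepthBits, pfd.cStencilBits);
    }
}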

Does it all depend on the zFar and zNear values?

No.

Isn’t there an OpenGL call to query and set the value?

As V-man pointed out, look at DescribePixelFormat.


static const PIXELFORMATDESCRIPTOR pfd =    // pfd tells Windows how we want things to be
{
    sizeof(PIXELFORMATDESCRIPTOR),          // Size of this pixel format descriptor
    1,                                      // Version number
    PFD_DRAW_TO_WINDOW |                    // Format must support window
    PFD_SUPPORT_OPENGL |                    // Format must support OpenGL
    PFD_DOUBLEBUFFER,                       // Must support double buffering
    PFD_TYPE_RGBA,                          // Request an RGBA format
    32,                                     // Select our color depth
    0, 0, 0, 0, 0, 0,                       // Color bits ignored
    0,                                      // No alpha buffer
    0,                                      // Shift bit ignored
    0,                                      // No accumulation buffer
    0, 0, 0, 0,                             // Accumulation bits ignored
    24,                                     // ********* 24-bit Z-buffer (depth buffer) ********* NOTE
    0,                                      // No stencil buffer
    0,                                      // No auxiliary buffer
    PFD_MAIN_PLANE,                         // Main drawing layer
    0,                                      // Reserved
    0, 0, 0                                 // Layer masks ignored
};

hDC = GetDC(hMain);

GLuint PixelFormat;
PixelFormat = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, PixelFormat, &pfd);

hRC = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRC);

P.S. OpenGL always depends on an external, platform-dependent windowing interface to initialize the context. On Win32 it’s WGL, on Linux it’s GLX, and on mobile it’s EGL.
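On the GLX side, here is a minimal sketch of the same negotiation, assuming the legacy glXChooseVisual path; the requested attributes are chosen purely for illustration:

#include <X11/Xlib.h>
#include <GL/glx.h>
#include <stdio.h>

/* Ask GLX for a double-buffered RGBA visual with at least 24 depth bits,
   then read back how many depth bits the chosen visual actually has. */
int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (vi)
    {
        int depthBits = 0;
        glXGetConfig(dpy, vi, GLX_DEPTH_SIZE, &depthBits);
        printf("depth buffer: %d bits\n", depthBits);
        XFree(vi);
    }
    XCloseDisplay(dpy);
    return 0;
}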

Regardless of the windowing system, you can always query the actual number of bits you got, with

glGetIntegerv(GL_DEPTH_BITS, &actualbits);
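For example (actualbits is just an illustrative name, and the query is only valid once a context has been made current):

GLint actualbits = 0;
glGetIntegerv(GL_DEPTH_BITS, &actualbits);   /* call after wglMakeCurrent / glXMakeCurrent */
printf("depth buffer: %d bits\n", actualbits);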

Tonight I tried to explain to a friend of mine how the 3D pipeline works. I hope I succeeded, at least as much as I could during the car ride… :slight_smile:

But at the end he asked me: why isn’t the Z-buffer 32-bit or even 64-bit? Hmmm… Believe it or not, after five years of teaching computer graphics I didn’t know what to say. Only 24-bit Z-buffers are available, even on very powerful 3D accelerators. :frowning:

Is there any justification for that?

Speculation:

  1. Stencils became immensely popular at a very critical time and are 8 bits, thanks to reflections/shadow volumes.
  2. All existing codebases depend on z being in [0;1].
  3. Floats don’t get compressed as nicely (or supported at all) by HiZ/ZCull/EarlyZ yet.

I do think 3) is the main killer of anything higher than a 24-bit Z-buffer. Even 32-bit integer depth would be harder to optimize.

I guess it will come, but later rather than sooner.

Thank you, guys! [thumbs-up]
I admire your zeal in answering so many posts.
I would get tired after only a few months. :slight_smile: