My own GL buffers

I think this could be useful to some people, especially those seeking high-quality graphics.

How about telling GL what we want our buffers to be like? I'm thinking about the z-buffer specifically, where 16, 24, or 32 bits may not be enough. It would be nice if there were a function with which we could ask GL to make a 64-bit or even higher-precision z-buffer.

It can be done in software and we shouldn’t care about performance.

The same could be done for the stencil buffer.

V-man

This is an implementation/platform-dependent issue. You don't ask OpenGL for a specific pixel format; you ask the platform for one, which OpenGL then uses.

On Windows, for example, you use Win32 API functions to choose and set a pixel format (this is where the depth buffer is specified), then you create a context using the wgl functions, which are platform-dependent functions for Windows only. When this is done, you start using OpenGL.
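For reference, that Windows path looks roughly like this. It is only a minimal sketch: the create_gl_context wrapper and the hdc argument are names made up for illustration, while PIXELFORMATDESCRIPTOR, ChoosePixelFormat, SetPixelFormat, wglCreateContext, and wglMakeCurrent are the actual Win32/wgl calls involved.

```c
#include <windows.h>
#include <GL/gl.h>

/* Sketch: ask Windows (not OpenGL) for a pixel format with a given
   depth buffer size, then create and bind a GL context with wgl. */
HGLRC create_gl_context(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 24;
    pfd.cDepthBits   = 24;   /* the depth buffer is requested here, from the platform */
    pfd.cStencilBits = 8;

    int format = ChoosePixelFormat(hdc, &pfd);   /* Windows picks the closest supported format */
    if (format == 0 || !SetPixelFormat(hdc, format, &pfd))
        return NULL;

    HGLRC ctx = wglCreateContext(hdc);           /* wgl: Windows-only entry points */
    if (ctx != NULL)
        wglMakeCurrent(hdc, ctx);
    return ctx;
}
```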

Also, think before you say that you want a 64-bit depth buffer, or even a 32-bit depth buffer. When vertices are specified as floats (23 bits of mantissa), and vertex computations are performed on floats, more than 24 bits of depth precision is virtually useless. Several graphics products that have claimed to have 32-bit depth buffers actually can’t use their 32 bits to any advantage, due to this limitation.
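To put a number on that: IEEE single precision stores 23 mantissa bits (24 significant bits counting the implicit leading one), so two values that differ by less than about one part in 2^24 collapse to the same float before any depth test ever happens. A small standalone sketch of that arithmetic:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    float z    = 1.0f;
    float tiny = 1.0f / (1 << 25);   /* 2^-25, below float resolution near 1.0 */

    printf("significant bits in a float: %d\n", FLT_MANT_DIG);   /* prints 24 */
    printf("1.0f + 2^-25 == 1.0f ? %s\n",
           (z + tiny == z) ? "yes" : "no");                      /* prints yes */
    return 0;
}
```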

Matt

I’ll try to reply to both here.

Yes, I realize that GL avoids doing the pixel format business and leaves that to the OS, but I'm sure it could be made possible to have GL create its own buffers and ignore the z-buffer that is probably present in video memory (maybe that could be deallocated).

I did not think about the float precision here, but even with 24- or 32-bit precision, there could be z-fighting. How about computing the z-buffer-related numbers in 64-bit or 80-bit on the CPU (x86), or even higher on other special hardware? There's not much point in having it done on the GPU anyway.
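A very rough sketch of what a software, wider-precision depth test could look like on the CPU; the SoftDepthBuffer type and the function names here are made up purely for illustration, and double could be swapped for long double to get the x86 80-bit format on many compilers.

```c
#include <stdlib.h>

typedef struct {
    int     width, height;
    double *depth;               /* one 64-bit depth value per pixel */
} SoftDepthBuffer;

SoftDepthBuffer *sdb_create(int w, int h)
{
    SoftDepthBuffer *b = malloc(sizeof *b);
    b->width  = w;
    b->height = h;
    b->depth  = malloc((size_t)w * h * sizeof *b->depth);
    for (int i = 0; i < w * h; ++i)
        b->depth[i] = 1.0;       /* cleared to the far plane */
    return b;
}

/* GL_LESS-style test done entirely in software: returns 1 and stores
   the new depth if the fragment is closer than what is already there. */
int sdb_test_and_write(SoftDepthBuffer *b, int x, int y, double z)
{
    double *slot = &b->depth[(size_t)y * b->width + x];
    if (z < *slot) {
        *slot = z;
        return 1;
    }
    return 0;
}
```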

Are any GPU chip makers thinking about doing the float computations in 64-bit in the future?

V-man