Problem with PixelFormat and OpenGL extensions

Hello,

I need to get a PixelFormat with DepthBits = 32 and ColorBits = 32.

So when I use ChoosePixelFormat, it returns PixelFormat number 1 with DepthBits = 24. If I then do glMultiDrawArraysEXT = (PFNGLMULTIDRAWARRAYSEXTPROC)wglGetProcAddress("glMultiDrawArraysEXT");
the function pointer is not NULL (so the OpenGL extension is supported here).

Then I used DescribePixelFormat to search for the PixelFormat I actually need (DepthBits = 32), and I found PixelFormat number 77. But my problem is that with this format the OpenGL extensions are not supported (glMultiDrawArraysEXT = NULL).

Is it possible to know which OpenGL extensions are supported by each PixelFormat?
Why aren't the extensions supported by every PixelFormat?
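
For illustration, the search I describe is roughly this kind of loop (a simplified sketch, not my exact code; the extra PFD flag checks are assumptions):

    #include <windows.h>

    /* Sketch: walk every pixel format of the DC and return the first one
       that reports 32-bit color and 32-bit depth. */
    int FindPixelFormat(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd;
        /* With a NULL descriptor pointer, DescribePixelFormat just returns
           the highest pixel format index supported by this DC. */
        int count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);
        int i;
        for (i = 1; i <= count; ++i)
        {
            if (!DescribePixelFormat(hdc, i, sizeof(pfd), &pfd))
                continue;
            if ((pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
                (pfd.dwFlags & PFD_DRAW_TO_WINDOW) &&
                pfd.cColorBits == 32 &&
                pfd.cDepthBits == 32)
            {
                return i;   /* in my case this gives format 77 */
            }
        }
        return 0;           /* no format matched */
    }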

Thanks a lot

Check the GL_VENDOR string of PixelFormat number 77; it may be a non-accelerated pixel format.

BTW I don't know of any video card actually able to do a 32-bit depth buffer. Only 16 and 24 exist to my knowledge.
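
Something like this, once a context created on that pixel format has been made current (just a sketch; drivers can report different strings):

    #include <windows.h>
    #include <GL/gl.h>
    #include <string.h>

    /* Needs a current GL context created on the pixel format being tested. */
    BOOL IsGenericRenderer(void)
    {
        const char *vendor   = (const char *)glGetString(GL_VENDOR);
        const char *renderer = (const char *)glGetString(GL_RENDERER);

        /* "Microsoft Corporation" / "GDI Generic" is the software fallback:
           no hardware acceleration, OpenGL 1.1 only, so practically no extensions. */
        return (vendor   && strstr(vendor,   "Microsoft")   != NULL) ||
               (renderer && strstr(renderer, "GDI Generic") != NULL);
    }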

Thanks ZbuffeR

Indeed, I've checked the GL_VENDOR and GL_RENDERER strings.

With pixelformat number 1, these strings are:
GL_VENDOR = "NVIDIA Corporation"
GL_RENDERER = "GeForce 7900 GS/PCI/SSE2"
GL_VERSION = "2.1.2"

But with pixelformat number 77, these strings are different:
GL_VENDOR = "Microsoft Corporation"
GL_RENDERER = "GDI Generic"
GL_VERSION = "1.1.0"

So with this information we can determine which PixelFormat is accelerated.
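
For what it's worth, it looks like the same thing can be decided without creating a context at all, from the PFD_GENERIC_FORMAT / PFD_GENERIC_ACCELERATED flags that DescribePixelFormat fills in. A rough sketch (the classification in the comments is my understanding of the flags, not something I have tested on every driver):

    #include <windows.h>

    /* Sketch: classify a pixel format from the PFD flags alone, without
       creating an OpenGL context on it. */
    BOOL IsAcceleratedFormat(HDC hdc, int format)
    {
        PIXELFORMATDESCRIPTOR pfd;
        if (!DescribePixelFormat(hdc, format, sizeof(pfd), &pfd))
            return FALSE;

        BOOL generic     = (pfd.dwFlags & PFD_GENERIC_FORMAT)      != 0;
        BOOL accelerated = (pfd.dwFlags & PFD_GENERIC_ACCELERATED) != 0;

        /* !generic                  -> full ICD driver (the NVIDIA format 1 above)
           generic  &&  accelerated  -> MCD, partially hardware accelerated
           generic  && !accelerated  -> GDI Generic software renderer (format 77) */
        return !generic || accelerated;
    }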

Thanks for your help…

Just curious… as float color is available today (as 32-bit floats, I'd have to assume), shouldn't a 32-bit float depth buffer be both possible and reasonable too?

This makes even more sense if Intel indeed manages to create something competitive (at the time of its release) with Larrabee, as it'd likely use SSE and therefore support IEEE-754 floats.

Wild guess here, but some depth-buffer optimizations probably won't work with 32-bit depth: depth compression, fast Z, whatever…
So even if it is possible, it will probably be noticeably slower.

(again I slide OT, but this could be an interesting sidestep)
ZbuffeR, I think you touched an interesting point there!

The hacker in me immediately came up with a 15-bit fixed-point-float depth buffer where each entry covers a larger area than one output pixel (e.g. 4x4, or 8x8 to match MPEG's macroblocks :wink: ), using the highest bit to say either "I'm the useful value" or "look up the real value in the framebuffer-sized depth buffer".

Then it comes down to how cache-friendly the cases are where the "real" depth buffer has to be read (keeping in mind that GPU memory buses are nowadays at least 128 bits wide, often wider):
4x4:

  • 3_byte_float*4_pixels (= 12 bytes = cache-unfriendly)
  • 4_byte_float*4_pixels (= 16 bytes = more cache friendly)

8x8:

  • 3_byte_float*8_pixels (= 24 bytes = still cache unfriendly)
  • 4_byte_float*8_pixels (= 32 bytes = even more cache friendly)

This assumes the GPU’s L2-caches aren’t optimized for 24-bit alignment (which they may be), but L1 caches are AFAIK always POW2.