Pixelformat enumeration

How can I enumerate pixel formats that have a different color depth than the current color depth of the DC I’m working with? Basically, I’m trying to mimic the functionality offered by D3D in the methods EnumAdapterModes, CheckDeviceFormat and CheckDepthStencilMatch.

I need to be able to enumerate all the color depths and the matching depth/stencil depths for an adapter. To do this, I currently have to loop over the color depths I want to query (16, 24 and 32 bit), change the display settings of the device to each one, create a temporary dummy window, get its DC, set up a rendering context, and call wglGetPixelFormatAttribivARB.
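A minimal sketch of that loop, assuming a Win32/WGL environment. CreateDummyGLWindow and RecordFormat are hypothetical helpers: the first would register a window class, create a hidden window, set a basic pixel format and make a GL context current (so the wgl extension entry point can be fetched); the second would just store the result.

```c
/* Sketch only: enumerate depth/stencil formats at each color depth
   by temporarily switching the display mode. Win32/WGL assumed. */
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_* attribute tokens, PFNWGL... typedefs */

static const DWORD depths[] = { 16, 24, 32 };

void EnumerateFormatsPerDepth(void)
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize = sizeof(dm);

    for (int i = 0; i < 3; ++i) {
        /* Switch the adapter to the color depth we want to query. */
        EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &dm);
        dm.dmBitsPerPel = depths[i];
        dm.dmFields = DM_BITSPERPEL;
        if (ChangeDisplaySettings(&dm, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
            continue;

        HWND wnd; HDC dc; HGLRC rc;
        CreateDummyGLWindow(&wnd, &dc, &rc);   /* hypothetical helper */

        PFNWGLGETPIXELFORMATATTRIBIVARBPROC wglGetPixelFormatAttribivARB =
            (PFNWGLGETPIXELFORMATATTRIBIVARBPROC)
            wglGetProcAddress("wglGetPixelFormatAttribivARB");

        if (wglGetPixelFormatAttribivARB) {
            int attribs[] = { WGL_COLOR_BITS_ARB, WGL_DEPTH_BITS_ARB,
                              WGL_STENCIL_BITS_ARB };
            int values[3];
            /* DescribePixelFormat returns the total format count. */
            int count = DescribePixelFormat(dc, 1,
                            sizeof(PIXELFORMATDESCRIPTOR), NULL);
            for (int pf = 1; pf <= count; ++pf)
                if (wglGetPixelFormatAttribivARB(dc, pf, 0, 3, attribs, values))
                    RecordFormat(depths[i], values[0], values[1], values[2]);
        }

        /* Tear down and restore the original desktop mode. */
        wglMakeCurrent(NULL, NULL); wglDeleteContext(rc);
        ReleaseDC(wnd, dc); DestroyWindow(wnd);
        ChangeDisplaySettings(NULL, 0);
    }
}
```

The ChangeDisplaySettings(NULL, 0) call at the end of each iteration is what causes the repeated mode switches the next paragraph complains about.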

However, the step to change the display mode is quite annoying, since it makes the user’s monitor flicker and resync on every switch.

Just enumerating at the current color depth and assuming the same depth/stencil combinations are available at other color depths does not work, since they typically aren’t.

Any suggestions?

/ Mattias

Hmm, it used to be that you could enumerate all the pixel format combinations offered by the OpenGL driver by looping over the GDI-compatible DescribePixelFormat API (which calls into the ICD’s wglDescribePixelFormat), without switching modes.
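A minimal sketch of that GDI-only enumeration, assuming Win32; DescribePixelFormat with index 1 returns the total number of formats, and no mode switch or GL context is involved:

```c
/* List every pixel format the DC exposes via plain GDI. */
#include <windows.h>
#include <stdio.h>

void ListPixelFormats(HDC dc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(dc, 1, sizeof(pfd), NULL);
    for (int i = 1; i <= count; ++i) {
        DescribePixelFormat(dc, i, sizeof(pfd), &pfd);
        printf("format %d: color %d, depth %d, stencil %d%s\n",
               i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits,
               (pfd.dwFlags & PFD_GENERIC_FORMAT) ? " (software)" : "");
    }
}
```

Called with, say, GetDC(GetDesktopWindow()), this lists formats without any rendering context. As the follow-up below notes, though, it only reports formats at the adapter’s current color depth.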

You should be able to use the desktop window rather than creating a dummy window for obtaining a GDI DC.

Also, are you sure it is really necessary to create a GL rendering context? Unless you are going to execute a GL command, most WGL functions only need the GDI DC, which they use like a handle to index an internal list.

Finally, I recall that some CAD apps would cache the name of the active OpenGL ICD driver along with the available pixel format info, so that a lengthy enumeration like this is performed only once unless the installed graphics board or driver changes.

I did tests where I looped through all the pixel formats and used DescribePixelFormat, but they all describe formats that have the same color depth as the current display mode of the adapter.

I need to be able to enumerate the pixelformats available for the adapter in different color depths, since the depth/stencil buffer combinations are typically not the same for 16 and 32 bit color depth on my card (and I guess some other cards as well).

I needed a context to be able to get the proc addresses for the WGL pixel-format extension functions; without a context, wglGetProcAddress just returned null pointers.
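That bootstrap step can be sketched as follows, assuming Win32 and a caller-supplied hidden window; GetWglProc is a hypothetical helper name:

```c
/* wglGetProcAddress only returns valid pointers while a rendering
   context is current, so a throwaway context is made current just
   long enough to fetch the entry point. */
#include <windows.h>
#include <GL/gl.h>

void *GetWglProc(HWND wnd, const char *name)
{
    HDC dc = GetDC(wnd);
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW, PFD_TYPE_RGBA };
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);

    void *proc = (void *)wglGetProcAddress(name); /* non-NULL only now */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    ReleaseDC(wnd, dc);
    return proc;
}
```

Note the caveat raised further down the thread: the returned pointer is pixel-format dependent, so it should only be trusted for contexts using a similar format.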

The different depth and stencil buffer sizes per color depth are not too surprising, as the same thing has been showing up in D3D drivers.

The separate pixel format enumerations per color depth are likely a legacy of the original OpenGL/GDI design. With early OpenGL accelerators on NT4, the user had to reboot for any OEM-specific control panel changes to take effect.

So dynamically switching the display mode is cheap by comparison. You might want to consider the pixel format caching idea and move the mode switching into a separate setup step in your app.
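The caching idea boils down to keying the stored enumeration on the driver identity, so it is redone only when the board or driver changes. The key-building part is plain C and can be sketched like this; in practice the three strings would come from glGetString(GL_VENDOR), glGetString(GL_RENDERER) and glGetString(GL_VERSION) (the function names here are hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Build a "vendor|renderer|version" identity key into key[]. */
char *BuildDriverKey(char *key, size_t n,
                     const char *vendor, const char *renderer,
                     const char *version)
{
    snprintf(key, n, "%s|%s|%s", vendor, renderer, version);
    return key;
}

/* The cached pixel format list is stale when the stored key no
   longer matches the key built from the live driver strings. */
int CacheIsStale(const char *storedKey, const char *currentKey)
{
    return strcmp(storedKey, currentKey) != 0;
}
```

On startup the app would build the current key, compare it with the one saved alongside the cached format list, and only rerun the expensive mode-switching enumeration when CacheIsStale reports a mismatch.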

I am surprised to hear that a rendering context is necessary for using that WGL extension. In a “normal” driver design, the GL RC is only needed as the destination for GL commands, while the GDI DC is used as a more general data structure index, similar to a Win32 window handle.

Even in that case, you should be able to create a GL RC for the desktop window’s DC for this extension operation; you just wouldn’t want to actually draw into that particular RC. This was a common trick for reading GL string info without creating a window that would never be rendered to.

Originally posted by IronicResearch:
[b]
I am surprised to hear that a rendering context is necessary for using that WGL extension. In a “normal” driver design, the GL RC is only needed as the destination for GL commands, while the GDI DC is used as a more general data structure index, similar to a Win32 window handle.

Even in that case, you should be able to create a GL RC for the desktop window’s DC for this extension operation; you just wouldn’t want to actually draw into that particular RC. This was a common trick for reading GL string info without creating a window that would never be rendered to.[/b]
The reason to create a rendering context is that you need to call wglGetProcAddress to get the addresses of the WGL ARB procs, but wglGetProcAddress is pixel-format dependent (you won’t have the same procs available for a software pixel format as for a hardware pixel format):

The wglGetProcAddress function returns the address of an OpenGL extension function for use with the current OpenGL rendering context.

For the same reason, since you need to set a pixel format and create a context, you cannot just use the desktop’s HDC: even assuming you could set a pixel format on the desktop’s HDC, you would never be able to change it again (an HDC can only be assigned one pixel format in its whole lifetime).
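That set-once rule can be demonstrated in a few lines, assuming Win32; TryRepick is a hypothetical illustration, not a real API:

```c
/* Once SetPixelFormat succeeds on a window DC, a second call with
   a different format index is expected to fail for the lifetime of
   that window. */
#include <windows.h>

BOOL TryRepick(HDC dc, int firstFormat, int secondFormat,
               PIXELFORMATDESCRIPTOR *pfd)
{
    if (!SetPixelFormat(dc, firstFormat, pfd))
        return FALSE;            /* couldn't set a format at all   */
    /* The DC now keeps firstFormat; this second call should fail. */
    return SetPixelFormat(dc, secondFormat, pfd);
}
```

This is why the standard workaround is a throwaway dummy window per format choice rather than reusing one DC.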

Actually, these WGL ARB extensions are something of a chicken-and-egg problem: you need an OpenGL context to query the pixel formats that the WGL ARB extensions expose, but you want those pixel formats precisely in order to create the context.

Ideally these extensions should be supported by opengl32.dll rather than by the ICD (by querying all installed ICDs), but that would imply a redesign of that DLL.

BTW, on the original question:

  1. Should WGLChoosePixelFormatARB consider pixel formats at other display depths?
    It would be useful to have an argument to WGLChoosePixelFormatARB indicating what display depth should be used.
    However, there is no good way to implement this in the ICD since pixel format handles are sequential indices and the pixel format for index n differs depending on the display mode.

From WGL_ARB_pixel_format

Thanks EvanGLizer for referring to the extension spec which I obviously hadn’t done…

Sounds like enumerating the available visual format info is still a relatively expensive step, and the app has to be smart enough to do it as seldom as possible.

Otherwise the responsibility of getting the graphics subsystem into a compatible display mode would be left to the user, like in the SGI workstation days.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.