Some confusion when using wglChoosePixelFormatARB

Problem:
wglChoosePixelFormatARB supersedes ChoosePixelFormat.
Pixel formats are grouped into four categories:

1. Accelerated pixel formats that are displayable
2. Accelerated pixel formats that are displayable and which have extended attributes
3. Generic pixel formats
4. Accelerated pixel formats that are non-displayable.

ChoosePixelFormat can only return pixel formats from groups 1 and 3, while wglChoosePixelFormatARB
can return formats from all four groups. Hence, wglChoosePixelFormatARB is needed when you want sRGB framebuffers
or multisampling.
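
To make that concrete, here is a rough sketch (not the code from my program below) of the kind of attribute list that only wglChoosePixelFormatARB can satisfy; the multisample and sRGB tokens come from WGL_ARB_multisample and WGL_ARB_framebuffer_sRGB:

/* Sketch: request a multisampled, sRGB-capable format -- attributes that
   have no equivalent in a PIXELFORMATDESCRIPTOR. */
const int extendedAttribs[] = {
    WGL_DRAW_TO_WINDOW_ARB,           TRUE,
    WGL_SUPPORT_OPENGL_ARB,           TRUE,
    WGL_DOUBLE_BUFFER_ARB,            TRUE,
    WGL_PIXEL_TYPE_ARB,               WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,               24,
    WGL_DEPTH_BITS_ARB,               24,
    WGL_SAMPLE_BUFFERS_ARB,           1,    /* enable multisampling       */
    WGL_SAMPLES_ARB,                  4,    /* 4x MSAA                    */
    WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB, TRUE, /* sRGB-capable framebuffer   */
    0
};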

wglChoosePixelFormatARB uses an int array of attribute/value pairs to describe the desired format, while
ChoosePixelFormat uses a PIXELFORMATDESCRIPTOR structure. Since wglChoosePixelFormatARB’s attribute list can
describe more information, the extra attributes have no equivalent fields in a PIXELFORMATDESCRIPTOR.
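
As a side note (again just a sketch, assuming the WGL_ARB_pixel_format entry points have already been loaded, and with hdc/pf standing in for a real device context and format index), the extension also lets you read those attributes back per format with wglGetPixelFormatAttribivARB, the attribute-based counterpart of DescribePixelFormat:

/* Sketch: query two attributes of pixel format 'pf' through the extension. */
const int query[]  = { WGL_SAMPLES_ARB, WGL_ACCELERATION_ARB };
int       values[] = { 0, 0 };
if (wglGetPixelFormatAttribivARB(hdc, pf, 0, 2, query, values))
{
    /* values[0] = sample count, values[1] = acceleration, e.g. WGL_FULL_ACCELERATION_ARB */
}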

All this considered, what goes into the PIXELFORMATDESCRIPTOR argument when you call SetPixelFormat
with a pixel format index returned by wglChoosePixelFormatARB? Does one set the PIXELFORMATDESCRIPTOR to zero,
or should DescribePixelFormat() be called on the returned pixel format index?

NOTE: Both schemes seem to “work”, I just want to do the most correct thing, so my application won’t suddenly stop working in the future.


static void CreateGL3Context(HDC& hdc, HGLRC& hglrc)
{
    int pixfmt;
    unsigned int numpf;
    PIXELFORMATDESCRIPTOR pd;

    const int piAttribIList[] = {
        WGL_DRAW_TO_WINDOW_ARB,     TRUE,
        WGL_ACCELERATION_ARB,       WGL_FULL_ACCELERATION_ARB,
        WGL_SUPPORT_OPENGL_ARB,     TRUE,
        WGL_DOUBLE_BUFFER_ARB,      TRUE,
        WGL_PIXEL_TYPE_ARB,         WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,         24,
        WGL_DEPTH_BITS_ARB,         24,
        WGL_STENCIL_BITS_ARB,       8,
        0,0
    };

    const int cca_list[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB,  3,
        WGL_CONTEXT_MINOR_VERSION_ARB,  3,
        WGL_CONTEXT_FLAGS_ARB,          WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        WGL_CONTEXT_PROFILE_MASK_ARB,   WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0,0
    };

    /* ask for at most one matching format; numpf receives how many were actually found */
    wglChoosePixelFormatARB(hdc, piAttribIList, NULL, 1, &pixfmt, &numpf);

    /* Is this the best solution? */
    DescribePixelFormat(hdc, pixfmt, sizeof(PIXELFORMATDESCRIPTOR), &pd);

    SetPixelFormat(hdc,pixfmt,&pd); //<-- requires an "old" and inferior descriptor even if we're not using that anymore
    hglrc=wglCreateContextAttribsARB(hdc,NULL, cca_list);
    wglMakeCurrent(hdc,hglrc);
}
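
For completeness, the “zeroed” alternative I mention above would be roughly this (just to illustrate what I mean; whether it is legitimate is exactly my question):

/* Alternative: hand SetPixelFormat a zero-initialized descriptor instead of
   the one filled in by DescribePixelFormat. */
PIXELFORMATDESCRIPTOR zero_pd = {};
SetPixelFormat(hdc, pixfmt, &zero_pd);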

All this considered, what goes into the PIXELFORMATDESCRIPTOR argument when you call SetPixelFormat
with a pixel format index returned by wglChoosePixelFormatARB?

… That is a very good question.

The SetPixelFormat documentation says this about what that PFD is for:

    The system’s metafile component uses this structure to record the logical pixel format
    specification. The structure has no other effect upon the behavior of the SetPixelFormat
    function.

I don’t really know what the “system’s metafile component” is or how it affects OpenGL. In my code, I just create a simple PFD that looks like this:


static PIXELFORMATDESCRIPTOR pfd =      // pfd Tells Windows How We Want Things To Be
{
    sizeof(PIXELFORMATDESCRIPTOR),      // Size Of This Pixel Format Descriptor
    1,                                  // Version Number
    PFD_DRAW_TO_WINDOW |                // Format Must Support Window
    PFD_SUPPORT_OPENGL |                // Format Must Support OpenGL
    PFD_DOUBLEBUFFER,                   // Must Support Double Buffering
    PFD_TYPE_RGBA,                      // Request An RGBA Format
    32,                                 // Select Our Color Depth
    0, 0, 0, 0, 0, 0,                   // Color Bits Ignored
    8,                                  // 8-bit Alpha
    0,                                  // Shift Bit Ignored
    0,                                  // No Accumulation Buffer
    0, 0, 0, 0,                         // Accumulation Bits Ignored
    24,                                 // 24-bit Z-Buffer (Depth Buffer)
    8,                                  // 8-bit Stencil Buffer
    0,                                  // No Auxiliary Buffer
    PFD_MAIN_PLANE,                     // Main Drawing Layer
    0,                                  // Reserved
    0, 0, 0                             // Layer Masks Ignored
};

That works for me, at least.

I know what “works”. I even mentioned that. I’m after the correct way. For all I know, passing in a “close” descriptor or a zeroed descriptor might work simply by pure luck or side effects. I want to know which approach is considered correct, so my application doesn’t suddenly break in the future.

I’m after the correct way.

You are assuming that there is a “correct way”, that everyone else isn’t simply using “what works”. If there is really a “correct way”, I’d be interested to know what it is. And more importantly, how they know it is in fact correct, considering how poor the official documentation is on the subject.
