Upgrading code from 1.1 to 1.5 or higher

Our company had been using OpenGL 1.1 for a really long time to achieve some level of compatibility across video cards. Unfortunately ATI/AMD cards had severe display issues relating to mipmaps, and we couldn’t do anything even mildly interesting such as antialiasing. Recently I took over the code and tied in GLEW so that I could start bringing our code up to date. Immediately I found that the ATI/AMD video card issues disappeared. I have also been adding antialiasing and plan to revamp a lot of code.

However, I have found that after adding in GLEW, all Mobile Intel 4 Series Graphics Chipset Family cards can no longer correctly perform depth buffering. I am still assuming it is something in my code. One thing I have noticed is that I cannot successfully find a good pixel format with MSAA for these cards. Is it likely that the pixel format would cause the depth buffer to go haywire? If so, is there a base pixel format that all implementations should follow at a minimum to preserve depth buffering?

Note: I have been searching the web for answers for the past two weeks, but I will admit that I might be searching on the wrong keywords. Thanks.

Your depth buffer bits are part of your pixel format, so the basic answer is “yes”. Depending on how you’re creating your context (I assume it’s via the standard Windows API calls) you can check the returned pixel format (via DescribePixelFormat) to ensure that you’re getting a good depth buffer.
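Something along these lines is enough to sanity-check what you actually got. This is just an untested sketch; hdc is assumed to be the device context you created your context on, and you’ll need <windows.h> for the GDI calls:

    // Query the pixel format that was actually selected on this DC
    // and inspect its depth bits.
    PIXELFORMATDESCRIPTOR pfd = {0};
    int formatIndex = GetPixelFormat(hdc);
    DescribePixelFormat(hdc, formatIndex, sizeof(pfd), &pfd);
    if (pfd.cDepthBits < 24)
    {
        // We didn't get the depth precision we were hoping for -
        // log it, or fall back to picking a format by hand.
    }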

One thing you need to be aware of with Intel chips is the golden rule: “if you can’t do it in Direct3D then don’t even attempt to do it in OpenGL”. Being aware of the capabilities and restrictions of Direct3D is important knowledge for programming OpenGL to run on these parts. In your specific case there is a restriction in D3D relating to multisampled render targets (and the backbuffer is treated as just another regular render target in D3D), which is that the color buffer and the depth buffer must have the same multisample settings. I suspect that’s at the root of your issue here.

Thanks for your assistance.

In all my attempts I hadn’t tried manually setting the pixel format. When I was testing for an acceptable MSAA format I wasn’t finding any, so I was falling back to setting the pixel format to 1. I just found that the depth buffer works properly with a pixel format of 4 on the Intel cards. Now I just need to figure out whether I can use that as a general default or if I should do something more elegant.

No, you cannot use an ID without testing the corresponding pixel format’s properties. I really don’t understand why you don’t set the desired parameters in the pixel format descriptor and query for an adequate pixel format. The system should return the most suitable one. When setting the descriptor, try to use meaningful values. For example, don’t try to set cDepthBits to 32, since no card I have ever seen supports that. Hey, maybe that is the problem! I remember that it was not possible to retrieve any multisampling format if the depth is set to 32 bit.

To expand: ChoosePixelFormat is the established way to get a matching pixelformat on Windows. Feed it a pointer to a PIXELFORMATDESCRIPTOR struct (which you will have already filled out with the properties you want) and it will give you a value that you can then feed into SetPixelFormat.
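Roughly, the whole sequence looks like this. The values are just an example (and the error handling is minimal); hdc is your window’s DC:

    // Standard Windows pixel format selection via ChoosePixelFormat.
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;   // ask for 24, not 32 - see the advice above
    pfd.iLayerType = PFD_MAIN_PLANE;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format != 0 && SetPixelFormat(hdc, format, &pfd))
    {
        HGLRC rc = wglCreateContext(hdc);
        wglMakeCurrent(hdc, rc);
    }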

An alternative way is to loop from 1 to infinity, feed the number into DescribePixelFormat (it will return 0 if you’ve run out of pixel formats, so that’s your break condition - note that the loop starts at 1 as pixel format numbers are 1-based), and if the described pixel format matches what you want then you’re off. You can save out the results of this enumeration if you want to give the user the ability to change the pixel format themselves (which will involve needing to destroy and recreate your OpenGL context too).
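A rough sketch of that enumeration; the matching criteria here are only placeholders for whatever rules you actually need:

    // Walk every pixel format on the DC and remember the ones that suit us.
    // DescribePixelFormat returns the maximum format index on success,
    // so the first call doubles as a way to get the total count.
    PIXELFORMATDESCRIPTOR pfd = {0};
    int maxFormat = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
    for (int i = 1; i <= maxFormat; ++i)
    {
        if (DescribePixelFormat(hdc, i, sizeof(pfd), &pfd) == 0)
            break;                       // ran out of formats
        if ((pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
            (pfd.dwFlags & PFD_DRAW_TO_WINDOW) &&
            (pfd.dwFlags & PFD_DOUBLEBUFFER) &&
            pfd.iPixelType == PFD_TYPE_RGBA &&
            pfd.cDepthBits >= 24)
        {
            // Candidate format - save i (and pfd) for a later SetPixelFormat,
            // or add it to a list you present to the user.
        }
    }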

There’s normally no need whatsoever to use that alternative method; ChoosePixelFormat does a pretty good job on its own, and is the standard way of doing it. It says somewhere on MSDN that ChoosePixelFormat is also guaranteed to always give you a format with a depth buffer, but I can’t find the link right now (nor have I tested this, nor would I be too inclined to rely on it).

The only reason I’ve seen where you might want to use the alternative method is if your driver behaves oddly with certain options. E.g. I’ve seen one Intel driver always return a 16-bit depth buffer when some legacy code asked for 32-bit, even though the driver supported 24-bit. Even then, simply switching cDepthBits to 24 resolved it more cleanly.

The MSDN page for SetPixelFormat has some sample code for the whole process (I suspect that the legacy code I mentioned was copy/pasted from there - even the comments were the same): http://msdn.microsoft.com/en-us/library/dd369049%28v=vs.85%29.aspx

One final note on this. If you get a 24-bit depth buffer, you may also get stencil, even if you didn’t ask for it. You should check whether this is the case (DescribePixelFormat again), and, if so, ensure that you always clear stencil at the same time as you clear depth. The driver can take a faster path for its clear if you do it this way.
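Something like this (again an untested sketch, assuming hdc is the DC you set the format on):

    // Check whether the chosen format carries stencil bits, and if so
    // clear depth and stencil together so the driver can use its fast path.
    PIXELFORMATDESCRIPTOR pfd = {0};
    DescribePixelFormat(hdc, GetPixelFormat(hdc), sizeof(pfd), &pfd);

    GLbitfield clearBits = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT;
    if (pfd.cStencilBits > 0)
        clearBits |= GL_STENCIL_BUFFER_BIT;

    glClear(clearBits);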

Technically, the established way these days is to use wglChoosePixelFormatARB, but it’s the same idea either way, and it’s more likely to get you what you want.
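For completeness, here’s roughly what that looks like once GLEW has been initialised (the WGL entry points only exist after some context is current, which is why the usual approach bootstraps via a throwaway window and context first). The attribute values are just an example; in practice you’d loop over a few sample counts until one matches:

    // Picking an MSAA-capable format with wglChoosePixelFormatARB
    // (needs GL/glew.h and GL/wglew.h, with glewInit already done).
    int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,     32,
        WGL_DEPTH_BITS_ARB,     24,     // again: 24, not 32
        WGL_STENCIL_BITS_ARB,   8,
        WGL_SAMPLE_BUFFERS_ARB, 1,
        WGL_SAMPLES_ARB,        4,      // 4x MSAA - try lower if nothing matches
        0
    };

    int format = 0;
    UINT numFormats = 0;
    if (WGLEW_ARB_pixel_format &&
        wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &numFormats) &&
        numFormats > 0)
    {
        // SetPixelFormat still needs a PIXELFORMATDESCRIPTOR, even though
        // the interesting choices were made through the attribute list.
        PIXELFORMATDESCRIPTOR pfd = {0};
        DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
        SetPixelFormat(hdc, format, &pfd);
    }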

The window also needs to be destroyed and recreated if the pixel format has already been set; setting the pixel format for a window more than once is not recommended.

About 20-25% of all supported pixel formats don’t have a depth buffer, so it is unlikely that Windows will always return a pixel format with depth. By the way, some of the formats are not hardware accelerated.
