Hi folks,
I’m trying to understand the usage of the pixel format descriptor in SetPixelFormat. The arguments to the function are:
BOOL SetPixelFormat(HDC hdc, int iPixelFormat, const PIXELFORMATDESCRIPTOR *ppfd);
I have yet to find an adequate description in the literature for how the PFD should be set or what it is subsequently used for.
The PFD seems redundant to me, since we're already passing the index of the pixel format. One would assume the PFD should be filled in with the proper values for that format, but I've seen examples where it isn't filled in at all.
I suspect that the system is using it for something because I have gotten different results in the past depending on whether I fill the values in.
Are there flags that can be set that affect the behaviour of SetPixelFormat for a given format? Can someone provide a clear explanation for what this is used for?
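For reference, the pattern I keep seeing in examples looks roughly like the sketch below. This is Windows-only code and assumes a valid device context obtained elsewhere (window creation and error reporting are omitted); the particular field values are just typical ones, not anything the documentation mandates:

```c
#include <windows.h>

/* Sketch of the usual pixel-format setup. Assumes hdc is a valid
 * device context, e.g. from GetDC() on a window you created.
 * Returns the chosen format index, or 0 on failure. */
int setup_pixel_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;   /* typical values; adjust to taste */
    pfd.cDepthBits = 16;

    /* ChoosePixelFormat returns the index of the closest matching
     * format supported by the device context. */
    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0)
        return 0;

    /* The same PFD is then passed back alongside the index --
     * this is the apparent redundancy in question. */
    if (!SetPixelFormat(hdc, format, &pfd))
        return 0;

    return format;
}
```

Note that in this common pattern the PFD handed to SetPixelFormat is the one used to *request* the format, not necessarily the exact description of the format the index refers to, which is part of what puzzles me.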
Thanks,
Daniel Oberlin