P-buffer pixelformat 128bpp (Doesn't work on Nvidia)

I’ve got a number of calculations running off-screen; the results are subsequently read back from the pbuffer using glReadPixels. I set up the pixel format like this:

int pf_attr[] =
{
    WGL_SUPPORT_OPENGL_ARB, TRUE,
    WGL_DRAW_TO_PBUFFER_ARB, TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, TRUE,
    WGL_RED_BITS_ARB, 32,
    WGL_GREEN_BITS_ARB, 32,
    WGL_BLUE_BITS_ARB, 32,
    WGL_ALPHA_BITS_ARB, 32,
    WGL_DEPTH_BITS_ARB, 16,
    WGL_DOUBLE_BUFFER_ARB, FALSE,
    0
};

wglChoosePixelFormatARB( g_hDC,(const int*)pf_attr, NULL, 1, &pixelFormat, &count);

This works fine on most recent ATI cards, but no NVIDIA card (GeForce 5950 / GeForce 6800 / Quadro, all with the most recent drivers) will give me a 128 bpp pixel format! I’ve also tried:

… WGL_COLOR_BITS_EXT, 128, …

Is it even possible to get high-precision pbuffers on NVIDIA hardware? (RealTech’s lovely OpenGL Extensions Viewer reports that the NVIDIA cards I’ve tried have no 128 bpp pixel formats, but that can’t be true!?)

Try this:
WGL_DEPTH_BITS_ARB, 24

You didn’t specify:

WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_FLOAT_ARB

The default is WGL_TYPE_RGBA_ARB. I believe the ATI implementation is in error here, although the spec is ambiguous on this point: it doesn’t say anywhere that TYPE_RGBA requires fixed-point buffers, but the color_buffer_float extension adds the TYPE_RGBA_FLOAT token, which rather implies it.

AFAIK ATI doesn’t support 128-bit fixed-point color buffers. Please correct me if I am wrong! After making the context current, you can call glGetBooleanv(GL_RGBA_FLOAT_MODE_ARB, &isFloat) to confirm that you actually got a floating-point buffer.

If you specify this flag, the NVIDIA driver will give you what you want.