16bpp GeForce fails, 32bpp OK

I have an OpenGL app running on a GeForce2 that works fine in 32bpp mode. It loads .3ds models and adds bumpmapping. I just tried running it in 16bpp mode: it slows down horribly, and Windows 2000 irregularly paints over the app with its window background color.

The app is definitely driving the hardware, since RegisterCombiners/NV_Vertex_Array are used without visible artifacts or GL errors.

When I remove all of the OpenGL drawing commands (glDrawRangeElements, glBegin, etc.), the app runs fine.

But if I draw so much as one triangle (via traditional glBegin/glEnd), it takes over 400ms for glEnd to return.

Other programs run fine in 16bpp mode, so something must be wrong on my end. Oddly, WM_PAINT messages are issued repeatedly (more frequently if I start with a lot of geometry, less frequently if I use just one triangle).

Everything works fine under 32bpp mode.

Any ideas/help is greatly appreciated.

Do you use the stencil buffer?

Thanks j,

I've disabled (i.e. commented out) all code involving stencil, alpha, blending, and even textures. All I'm down to is a white triangle, and this alone incurs the crazy penalty.

The app runs fine in 16bpp on a Radeon, a mobile GeForce2Go, and the MS software RGBA OpenGL renderer, so I'd assume it's a driver issue with this GeForce2, but I can run other 16bpp apps on it without trouble.

This GeForce2 is on driver 5.13.01.1080.
The GeForce2Go is on driver 5.12.01.755.

Go get the latest drivers from NVIDIA. You should always keep your drivers as fresh as possible.

Uuuh, well, forget about it. You have the same drivers as I do, the 10.80 ones. I thought for a moment it was a 5-series version.

Anyway, I haven't experienced any problems with 10.80, so it shouldn't be a case of a bad driver.

WM_PAINT issued repeatedly? Sounds like you need to put a BeginPaint()/EndPaint() pair in your WM_PAINT handler.
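
Something along these lines, as a minimal sketch (the window-proc and handle names are just illustrative):

#include <windows.h>

/* Minimal sketch: BeginPaint/EndPaint validate the update region, so
   Windows stops flooding the queue with WM_PAINT messages. */
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        BeginPaint(hWnd, &ps);   /* marks the dirty region as valid */
        /* GL rendering happens in the main loop, not here */
        EndPaint(hWnd, &ps);
        return 0;
    }
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}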

Thanks Humus,
but I have BeginPaint/EndPaint pairs in there. (I tried removing them, passing the message on to DefWindowProc, and just returning 1; none of it made any difference.) I suspect this has something to do with 16-bit Z-buffering on NVIDIA cards, since everything runs fine on the Radeon in both 16bpp and 32bpp, and the GeForce2Go has severe slowdown problems when running in 32bpp mode with a 16-bit depth buffer (no paint-message flicker there, just slowdown).

It's a really weird problem, because if I turn off the 'single triangle test' and run the regular .3ds files, I can see the dot-product stuff working, so I assume I haven't been punted to a software fallback (unless NVIDIA has a software path for register combiners?).

None of the code makes any reference to 32bpp or 16bpp, so I'm pretty stumped as to what the difference is and why it works on the Radeon and in software mode.

What pixel format do you ask for? If you request, for example, an 8-bit stencil together with a 24-bit depth buffer while in 16bpp, the driver can end up selecting

depth = colordepth - stencil,

which results in an 8-bit depth buffer… looks great then.
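
Something like this to check what you actually got (just a sketch; the 24/8 request is only an example):

#include <windows.h>
#include <string.h>
#include <stdio.h>

/* Illustrative check: ask for 24-bit depth + 8-bit stencil and print
   what the driver actually hands back in the current desktop mode. */
void DumpChosenFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 16;   /* e.g. a 16bpp desktop */
    pfd.cDepthBits   = 24;   /* asking for more than the mode may offer */
    pfd.cStencilBits = 8;

    int pf = ChoosePixelFormat(hdc, &pfd);
    DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);

    printf("pf %d: color %d depth %d stencil %d\n",
           pf, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits);
}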

GeForce-level drivers support 2 accelerated pixel formats:

16-bit color with a 16-bit z-buffer
32-bit color with a 24-bit z-buffer and 8-bit stencil.

Any other pixel format will not be properly accelerated.
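
To put it another way: match the requested depth/stencil to the color depth. A rough sketch (keying off the desktop bit depth here is just one way to do it):

#include <windows.h>
#include <string.h>

/* Illustrative: request the depth/stencil combination that matches the
   desktop color depth, per the two accelerated formats above. */
int ChooseMatchingFormat(HDC hdc)
{
    int bpp = GetDeviceCaps(hdc, BITSPIXEL);

    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;

    if (bpp <= 16) {            /* 16-bit color: 16-bit z, no stencil */
        pfd.cColorBits   = 16;
        pfd.cDepthBits   = 16;
        pfd.cStencilBits = 0;
    } else {                    /* 32-bit color: 24-bit z + 8-bit stencil */
        pfd.cColorBits   = 32;
        pfd.cDepthBits   = 24;
        pfd.cStencilBits = 8;
    }

    return ChoosePixelFormat(hdc, &pfd);
}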

Mahjii, I think it's pretty likely that the GeForce ICD does implement its extensions for software rendering too. The ICD can switch from hardware to software at any time, and having things suddenly stop working wouldn't be nice. I do wish NVIDIA provided some way to determine whether software or hardware rendering is being used.
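
That said, you can at least catch the Microsoft generic path (though not an ICD falling back internally). A rough sketch of the usual checks:

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

/* Heuristic sketch: detect the Microsoft generic (software) path.
   Note this cannot see an ICD falling back to software internally. */
int LooksLikeSoftwareGL(HDC hdc, int pf)
{
    PIXELFORMATDESCRIPTOR pfd;
    DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);

    /* generic format without acceleration == MS software renderer */
    if ((pfd.dwFlags & PFD_GENERIC_FORMAT) &&
        !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
        return 1;

    /* a GL_RENDERER of "GDI Generic" also means software (needs a current RC) */
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    if (renderer && strstr(renderer, "GDI Generic"))
        return 1;

    return 0;
}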

Thanks, everyone.

Davepermen,
I enumerated and chose each pixel format flagged as ICD, and all of them show the slowdown under 16bpp mode. I'm not certain why.
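
(The enumeration was basically this kind of loop; a sketch rather than the actual code:)

#include <windows.h>
#include <stdio.h>

/* Sketch of the enumeration: list every pixel format and flag the
   ICD (hardware) ones, i.e. those without PFD_GENERIC_FORMAT. */
void ListPixelFormats(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);

    for (int pf = 1; pf <= count; ++pf)
    {
        DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);
        int icd = !(pfd.dwFlags & PFD_GENERIC_FORMAT);

        printf("%2d: %s color %2d depth %2d stencil %d\n",
               pf, icd ? "ICD    " : "generic",
               pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits);
    }
}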

Korval,
In 16bpp, the only ICD formats available are indeed 16-bit color with 16-bit depth. (Oddly though, the GeForce2Go reports an ICD format with 16-bit color and a 24-bit z-buffer as well.)

ET3D,
It seems quite likely that everything is running in super-slow software mode, even though it reports NVIDIA, doesn't choke on any NVIDIA extensions, etc. I'm dumbfounded as to what's causing it, since even if I remove all GL calls except startup, drawing one white triangle, and shutdown, I'm still in slow mode under 16bpp.

Can you post your startup code? Maybe what you enable will give us some clue.

Sure,

INIT::
{
    //@@@
    //DEV
    Gfx.Dev.hDC = GetDC(Gfx.Dev.hWnd);

    tN                      pf;
    PIXELFORMATDESCRIPTOR   pfd;

    M_Clear(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
    pfd.nSize       = sizeof(pfd);
    pfd.nVersion    = 1;
    pfd.dwFlags     = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER | PFD_GENERIC_ACCELERATED;
    pfd.iPixelType  = PFD_TYPE_RGBA;
    pfd.iLayerType  = PFD_MAIN_PLANE;

    //---
    //Choose Pixel Format
    pf = ChoosePixelFormat(Gfx.Dev.hDC, &pfd);
    if( pf IS 0 ) //OR (pfd.cAlphaBits ISNOT 8)
    { ERR("ChoosePixelFormat() failed: Cannot find a suitable pixel format.", Err_Gfx_Init); }

    if( SetPixelFormat(Gfx.Dev.hDC, pf, &pfd) IS FALSE )
    { ERR("SetPixelFormat() failed: Cannot set format specified.", Err_Gfx_Init); }

    DescribePixelFormat(Gfx.Dev.hDC, pf, sizeof(PIXELFORMATDESCRIPTOR), &pfd);

    //---
    //Init RC
    Gfx.Dev.hRC = wglCreateContext(Gfx.Dev.hDC);
    wglMakeCurrent(Gfx.Dev.hDC, Gfx.Dev.hRC);

    //Log vendor/renderer/version/extensions strings
    #define Gfx_GL_Log_pAscii(_name) if( pC(glGetString(_name)) ){ LOG( pC(glGetString(_name)) ); LOG_LINE_THIN(); }
    Gfx_GL_Log_pAscii( GL_VENDOR);
    Gfx_GL_Log_pAscii( GL_RENDERER);
    Gfx_GL_Log_pAscii( GL_VERSION);
    Gfx_GL_Log_pAscii( GL_EXTENSIONS);

    LOG("Pf %d Color %d (%d-%d-%d-%d) Depth_Stencil (%d-%d) Accum %d", pf,
        pfd.cColorBits, pfd.cRedBits, pfd.cGreenBits, pfd.cBlueBits,
        pfd.cAlphaBits, pfd.cDepthBits, pfd.cStencilBits, pfd.cAccumBits);

    Gfx.Dev.bBpp = pfd.cColorBits;
}

RELEASE::
{
    if( Gfx_Screen_yesFull() )
    { ChangeDisplaySettings(NULL, 0); }

    if( Gfx.Dev.hRC )
    {
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(Gfx.Dev.hRC);
        Gfx.Dev.hRC = 0;
    }

    if( Gfx.Dev.hDC )
    {
        ReleaseDC(Gfx.Dev.hWnd, Gfx.Dev.hDC);
        Gfx.Dev.hDC = 0;
    }
}

==============================
(thanks for your help)-
( Hope the CODE /CODE formatting works! )

Arrgh, I don't know why the CODE/CODE formatting screwed up all the tabs…

Hope it's all readable.

Oops. What I actually wanted (I should have been clearer) was to see what you enable in OpenGL before you start rendering. In my experience, the NVIDIA ICD only switches to software rendering once you enable features it doesn't support in hardware.
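
If it helps, you could dump a few of the common enables right before the draw call. A quick sketch:

#include <GL/gl.h>
#include <stdio.h>

/* Sketch: print a few common enables so you can see what's on at draw time. */
void DumpEnables(void)
{
    struct { GLenum cap; const char *name; } caps[] = {
        { GL_TEXTURE_2D,   "GL_TEXTURE_2D"   },
        { GL_LIGHTING,     "GL_LIGHTING"     },
        { GL_BLEND,        "GL_BLEND"        },
        { GL_ALPHA_TEST,   "GL_ALPHA_TEST"   },
        { GL_STENCIL_TEST, "GL_STENCIL_TEST" },
        { GL_DEPTH_TEST,   "GL_DEPTH_TEST"   },
        { GL_FOG,          "GL_FOG"          },
    };

    for (int i = 0; i < (int)(sizeof(caps) / sizeof(caps[0])); ++i)
        printf("%-16s %s\n", caps[i].name,
               glIsEnabled(caps[i].cap) ? "on" : "off");
}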

What do you get for the bits counts (red bits, stencil bits, etc.)?

Hmm, the odd thing is that even if I remove all the GL code except that INIT/RELEASE stuff and just add glBegin … one white triangle … glEnd and SwapBuffers, I still get the weird slowdown in 16bpp mode (but never in 32bpp mode).
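
The entire per-frame test is essentially this (a sketch of the gist, not the literal code):

#include <windows.h>
#include <GL/gl.h>

/* Sketch of the whole per-frame test: clear, one white triangle, swap. */
void DrawTestFrame(HDC hDC)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();    /* in 16bpp this is where the ~400ms stall shows up */

    SwapBuffers(hDC);
}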

I get RGBA 8-8-8-8, depth 24, stencil 8 for pixel format 3 in 32bpp mode, and RGBA 5-6-5-0, depth 16, stencil 0 for format 9 in 16bpp mode.