32bit colours=fast! 16bit colours=very slow??

Howdy

I’ve got a problem. I’ve been reading through similar past posts, but I can’t work out a solution.

I’m testing an OpenGL app on a box with a GF4 440 card. The problem I’m having is that if I run it with a 32bit colour depth video mode it runs fine. But if I run it with a 16bit colour depth it slows to about 1fps! I thought 16bit would be faster than 32bit?

Is there a common reason for this? I’m not doing anything special, and the card can obviously handle what I’m drawing because it’s fine with 32. I’ve tried loading the textures with GL_RGB5 as the internal format to force 16bit textures, but it made no difference.
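
For reference, by the GL_RGB5 flag I mean requesting it as the internal format when the textures are uploaded, something along these lines (width, height and pixels stand in for whatever the loader produces):

    /* Requesting a 16-bit internal format for the textures. GL_RGB5 is only
       a hint -- the driver is free to store the texture however it likes.  */
    glTexImage2D (GL_TEXTURE_2D,    /* target                       */
                  0,                /* mipmap level                 */
                  GL_RGB5,          /* requested internal format    */
                  width, height,    /* texture dimensions           */
                  0,                /* border                       */
                  GL_RGB,           /* format of the supplied data  */
                  GL_UNSIGNED_BYTE, /* type of the supplied data    */
                  pixels);          /* pointer to the texel data    */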

Any suggestions would be very much appreciated.

Thanks!

Rob.

32bits is faster than 16bits on nvidia cards.

I remember nVidia cards are optimized for 32bit textures. Maybe the app slows down because you’re forcing it to use an unoptimized 16bit internal format, but 1fps seems far too slow for that… maybe someone more familiar with nVidia cards can answer that question.

Ahh I see. I didn’t know that. So there’s nothing that can be done?

It’s such a massive drop in speed that it seems unnatural.

Thanks!

Rob.

Your application is probably using the stencil buffer, which is only supported in hardware at 32 bit. At 16 bit, the graphics driver falls back to software rendering.
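
It’s worth querying the context after you create it to see what you actually got. A sketch, using nothing but standard GL calls (needs <stdio.h> and the GL headers):

    /* Ask the current context what it actually allocated. If stencil bits
       come back as 0, or the frame rate only recovers once the stencil test
       is disabled, the stencil buffer is the likely culprit. */
    GLint redBits, greenBits, blueBits, alphaBits, depthBits, stencilBits;

    glGetIntegerv (GL_RED_BITS,     &redBits);
    glGetIntegerv (GL_GREEN_BITS,   &greenBits);
    glGetIntegerv (GL_BLUE_BITS,    &blueBits);
    glGetIntegerv (GL_ALPHA_BITS,   &alphaBits);
    glGetIntegerv (GL_DEPTH_BITS,   &depthBits);
    glGetIntegerv (GL_STENCIL_BITS, &stencilBits);

    printf ("R%d G%d B%d A%d, depth %d, stencil %d\n",
            redBits, greenBits, blueBits, alphaBits, depthBits, stencilBits);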

You’re right, I am using the stencil buffer. But I already tried turning that off. It does speed things up, but it still doesn’t come close to how fast it runs in 32bit, with or without the stencil buffer.

So I’m still surprised by the huge drop. It must be card related though. I’ve tried it with an ATI card and it’s fine under both.

Rob.

You should also use a 16-bit depth buffer when using a 16-bit color buffer.

ATI cards don’t support 16-bit depth buffers, BTW…
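
In practice that just means keeping the color and depth bits in the pixel format descriptor in step with each other. A rough sketch (hdc is assumed to be your window’s device context, and the flags are just typical values):

    /* Request a 16-bit color buffer together with a 16-bit depth buffer.
       Remember this is only a request -- check what you actually get. */
    PIXELFORMATDESCRIPTOR pfd;
    int format;

    ZeroMemory (&pfd, sizeof (PIXELFORMATDESCRIPTOR));

    pfd.nSize        = sizeof (PIXELFORMATDESCRIPTOR);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 16;   /* 16-bit color buffer...               */
    pfd.cDepthBits   = 16;   /* ...paired with a 16-bit depth buffer */
    pfd.cStencilBits = 0;    /* no stencil in this particular format */

    format = ChoosePixelFormat (hdc, &pfd);
    SetPixelFormat (hdc, format, &pfd);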

You may very well be running a non-ICD pixel format.

There are only a limited number of 16-bit pixel formats available on my Radeon 9800, and most of them aren’t ICD accelerated… If you don’t check the actual pixel format for the “generic” or “generic accelerated” flags, you could be running in a software or MCD format.

You might also want to check whether you’re even getting a 16-bit format on the ATI card. The pixel format descriptor you pass to ChoosePixelFormat is merely a hint; the driver’s free to return anything it deems “closest” to what you asked for :slight_smile:

Case in point: requesting a 16-bit or 32-bit depth buffer on ATI hardware will ALWAYS get you a 24-bit buffer. If you run through ALL of the pixel formats reported by the driver, you’ll notice they’re ALL based on a 24-bit depth buffer, so the driver has no choice but to return 24 bits no matter what you ask for.

This is especially the case if the desktop is set to 32-bit color and you request a 16-bit color pixel format.

    /* Set the display to 16-bit */
    DEVMODE dm;
    ZeroMemory (&dm, sizeof (DEVMODE));

    dm.dmSize       = sizeof (DEVMODE);
    dm.dmFields     = DM_BITSPERPEL;
    dm.dmBitsPerPel = 16;

    ChangeDisplaySettings (&dm, CDS_FULLSCREEN); /* CDS_FULLSCREEN is misleading; it really just means
                                                    the original display settings are restored when the
                                                    application exits. */

    /* pfd is the PIXELFORMATDESCRIPTOR filled in by DescribePixelFormat... */

    /* Check for an ICD pixel format */
    if (! (pfd.dwFlags & PFD_GENERIC_FORMAT) &&
        ! (pfd.dwFlags & PFD_GENERIC_ACCELERATED)) {
      /* This is an ICD format -- fully hardware accelerated */
    } else if ((pfd.dwFlags & PFD_GENERIC_FORMAT) &&
               (pfd.dwFlags & PFD_GENERIC_ACCELERATED)) {
      /* This is an MCD format -- only partially accelerated */
    } else {
      /* This is the generic software format -- expect poor performance */
    }
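
If you want to see exactly what the driver exposes, you can also walk the whole list with DescribePixelFormat. Roughly like this (just a sketch; hdc is assumed to be a valid device context, and you’ll need <windows.h> and <stdio.h>):

    /* Enumerate every pixel format the driver reports and show which ones
       are ICD accelerated. Pixel format indices are 1-based, and the call
       also returns the highest valid index. */
    PIXELFORMATDESCRIPTOR pfd;
    int i, count;

    count = DescribePixelFormat (hdc, 1, sizeof (PIXELFORMATDESCRIPTOR), &pfd);

    for (i = 1; i <= count; ++i) {
      DescribePixelFormat (hdc, i, sizeof (PIXELFORMATDESCRIPTOR), &pfd);

      printf ("#%d: color %d, depth %d, stencil %d, %s\n",
              i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits,
              ! (pfd.dwFlags & PFD_GENERIC_FORMAT)      ? "ICD" :
                (pfd.dwFlags & PFD_GENERIC_ACCELERATED) ? "MCD" : "software");
    }

That makes it obvious which 16-bit color/depth/stencil combinations the driver is actually prepared to accelerate.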

Thanks for the info. I am actually checking to see if it’s accelerated or not. But it’s the nVidia card that’s having the problem. The ATI runs fine.

However, even in the super slow 16bit mode on the nVidia, it’s still reporting that it’s accelerated. I’m still surprised that nVidia cards just don’t do 16bit fast. It doesn’t seem right when 32bit is fine.

Rob.

I’ve just been doing some fiddling with some of my other programs, and I think I was right. There has to be a reason for it in the program because others I’ve written work fine under 16bit. So it can’t just be the nVidia card (I think someone suggested that they’re generally slow at 16bit).

So there must be something I’m doing wrong creating the window or something?

Rob.

Hello once again,

Looks like it could just be the stencil buffer under 16bit. With it turned off, it’s almost the same as 32, I think. Although 32 works fine with the stencil buffer, which must be what was meant earlier about the card not handling it in hardware under 16bit.

I think I’m starting to confuse myself. Sorry about that.

Thanks for all the help!

Rob.

Originally posted by whyjld:
32bits is faster than 16bits on nvidia cards.
Bull****.

Originally posted by Nil_z:
I remember nVidia cards are optimized for 32bit textures. Maybe the app slows down because you’re forcing it to use an unoptimized 16bit internal format.
Bull****.

Originally posted by UselessRob:
I’ve just been doing some fiddling with some of my other programs, and I think I was right. There has to be a reason for it in the program because others I’ve written work fine under 16bit. So it can’t just be the nVidia card (I think someone suggested that they’re generally slow at 16bit).

So there must be something I’m doing wrong creating the window or something?

Rob.
If you render into a window, you are at the mercy of the desktop display depth. I.e. if you have 16 bpp set as your desktop display depth, you’ll only get 16 bit color (RGB565, no alpha) plus a 16 bit depth buffer. Under these circumstances, if you try to use destination alpha and/or the stencil buffer, OpenGL will fall back to software rendering! If you have your desktop set to 32 bpp, you can also get destination alpha and a hardware accelerated stencil buffer.
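
A quick way to check what the desktop is currently set to before deciding what to ask for (a sketch; needs <windows.h>):

    /* Query the desktop color depth. On a 16 bpp desktop a windowed
       context can't give you destination alpha or a hardware stencil. */
    HDC screen = GetDC (NULL);                      /* DC for the whole screen */
    int bpp    = GetDeviceCaps (screen, BITSPIXEL); /* desktop bits per pixel  */
    ReleaseDC (NULL, screen);

    if (bpp < 32) {
      /* 16 bpp desktop: skip the destination alpha / stencil request,
         or switch the display to 32 bpp first. */
    }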

Fullscreen apps aren’t dependent on the desktop color depth at all. So you should specify whether you are doing windowed or fullscreen rendering. This will make it much easier for us to help you.

Ahh yes. Sorry. They’re all full screen. They are actually running in the depth I’ve asked for (it looks slightly different in 16bit). I’ve also got the desktop set to 16bit for testing. When I set the program to switch to 32bit when it runs, it works fine, but if I let it use the 16bit depth it’s a lot slower with the stencil buffer going.

Thanks.

Rob.

What does the vendor string say when you enable the stencil buffer under 16bit? I remember a post from quite some time ago where the vendor string reported “ForceSW”, i.e. instead of saying “Microsoft OpenGL…” it said “nVidia Detonator xx.xx ForceSW”. In that case the nVidia driver has reverted to software emulation.
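
Checking it is just a matter of printing the strings once your context is current; a quick sketch (needs <stdio.h> and the GL headers):

    /* If the vendor/renderer strings mention Microsoft (or a "ForceSW"
       style tag), the context is not hardware accelerated. */
    printf ("GL_VENDOR:   %s\n", (const char *) glGetString (GL_VENDOR));
    printf ("GL_RENDERER: %s\n", (const char *) glGetString (GL_RENDERER));
    printf ("GL_VERSION:  %s\n", (const char *) glGetString (GL_VERSION));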

It’s probably worth enumerating your display modes to see what display modes are supported with 16 bit color and stencil, and then try using one of those modes. There used to be a tool kicking around that would list the modes for you - no doubt someone on this list will have a link to that tool. Personally I’d just display a message when the user selects 16 bit mode. Something like “What are you doing? This isn’t a Voodoo2…” :smiley: