Maximized window causes big slowdown in MDI App

Hi, I’m experiencing some trouble in an MDI application that I have developed using MFC. If the child window containing the GL viewport is maximised, the GL windows that I render into run away with the processor.

Does anyone have any experience of this? Is there something that I need to do in the GL state to correct the problem?

Any help appreciated

Regards

matthew

Maybe you’re using software rendering, which frequently slows to a crawl when the whole screen is in action, even on the most powerful of systems.

Originally posted by ULTIMATE MAXIMUM POWER:
Maybe you’re using software rendering, which frequently slows to a crawl when the whole screen is in action, even on the most powerful of systems.

Try dragging and enlarging your MDI window slowly, and see if there is a sudden drop in performance.
When your video card runs out of memory, it may fall back to software mode.

Hi,

since posting I have realised the same thing. I’ve been running at 1600x1200 for so long that I must have started to think everyone does! My machine has a 32 meg GeForce 2, nothing special, and it must have just been coping. I reduced the resolution, maximised the windows, and there was no problem whatsoever.

The app pretty much has to run hardware accelerated. The machines that we build for our customers will have decent nVidia cards in them, and it’s pretty hard to find one now with less than 64 meg. At least I know what the problem is now and what to test.

So, thanks for your replies. One thing that I would like to know (so that I can provide a useful log) is how to spot when the card switches back to software rendering.

One thing that I did notice, however, is that when I switched back from a low res to 1600x1200, the app didn’t run away with the processor any more, even when the MDI windows were maximised. I’m guessing that switching modes recovered all 32 megs on the board, some of which may have been used by other GL apps that I develop with during the day and that had not freed the memory??? I dunno, just idle speculation…

Cheers anyway

Matthew

glGetString(GL_RENDERER) returns the name of the rendering device. if it says anything about “microsoft generic” then you’re in software mode.
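For instance, a minimal sketch of that check (it assumes a current GL context; Microsoft’s software implementation reports “GDI Generic” as the renderer and “Microsoft Corporation” as the vendor):

```cpp
#include <GL/gl.h>
#include <string.h>

// Returns true if the current context is Microsoft's generic software
// implementation. Requires a current OpenGL rendering context.
bool UsingGenericSoftwareGL()
{
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    const char* vendor   = (const char*)glGetString(GL_VENDOR);
    if (!renderer || !vendor)
        return false; // no current context; can't tell

    return strstr(renderer, "GDI Generic") != 0
        || strstr(vendor,   "Microsoft")   != 0;
}
```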

The renderer string trick won’t work when the hardware falls back to software rendering during runtime.

Bob,

what do you reckon then? Will the pixel format be changed for the window? I could call GetPixelFormat() then DescribePixelFormat()? I don’t see how this would work either, as when I un-maximise the window the performance appears to be OK again; that must mean that GL is promoted back to hardware acceleration.

It’s an interesting problem, but am I wasting everyone’s time with it? It would be nice to trap changes in rendering mode, but it’s not essential. I could just have a note put in the manual about a possible severe slowdown if the hardware isn’t up to the job. Less than ideal but…

Matthew

mcsellski, querying hardware or software rendering has been discussed lots of times here before, and each time we get the same answer: in OpenGL it’s not possible, nor will it be in the future. And there are reasons for this, but I won’t go into that now. You can search the forum for more info.

At most, you can use the renderer string (see SThomas’ post) or look at the pixel format descriptor for the PFD_GENERIC_FORMAT flag. If the string says Microsoft and/or the flag is set, you are guaranteed not to have hardware rendering. But this is as far as you can get.

If the renderer string says something else (like the name of your graphics board), and/or the PFD_GENERIC_ACCELERATED flag is set, you don’t know whether it’s hardware or software.

The renderer string gives you the name of the device driver that performs the rendering. This does not mean the actual hardware is rendering, but the driver for the hardware is responsible for the output. Whether this comes from the actual hardware, or a software fallback implementation for the hardware, you don’t know, nor can you find out. Same with the PFD_GENERIC_ACCELERATED flag. All it says is that a device driver is responsible for rendering, not the generic implementation.
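To make that concrete, here is a rough sketch of the startup-time pixel format check (hdc is assumed to be the DC the GL context was created on):

```cpp
#include <windows.h>

// Examines the pixel format flags on the given DC. Only the combination
// "PFD_GENERIC_FORMAT set, PFD_GENERIC_ACCELERATED clear" guarantees
// software rendering; everything else is inconclusive, as explained above.
bool IsGenericSoftwarePixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int format = GetPixelFormat(hdc); // the format currently selected on this DC
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);

    BOOL generic     = (pfd.dwFlags & PFD_GENERIC_FORMAT) != 0;
    BOOL accelerated = (pfd.dwFlags & PFD_GENERIC_ACCELERATED) != 0;

    // generic && !accelerated : Microsoft's generic implementation (software)
    // generic && accelerated  : an MCD driver (driver renders, may fall back)
    // !generic                : an ICD driver (driver renders, may fall back)
    return generic && !accelerated;
}
```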

Now, the above are ways to detect the generic software implementation or a device driver at startup. Once set, this will never change. If you get hardware acceleration, it’s the device driver that’s performing the rendering. If you do something that the driver doesn’t like, forcing it into software rendering, it is still the device driver that is responsible for rendering.

When someone here asks for a way to detect hardware or software rendering, in the end it turns out they mean to ask for a way to know whether the rendering path is fast or not. Hardware does not automatically mean fast, nor does software automatically mean slow. That said, there may be a way to achieve what you want: render a frame, measure the time taken to render it, and see if it’s acceptable. If it’s fast enough, do you care whether it’s performed in software or hardware?
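A rough sketch of that timing test (RenderScene() is a placeholder for your own drawing code, and the budget is just an illustrative value; pick whatever frame time your app considers acceptable):

```cpp
#include <windows.h>
#include <GL/gl.h>

void RenderScene(); // placeholder for your existing drawing code

// Renders one frame and reports whether it completed within the given
// budget. glFinish() blocks until GL has actually finished drawing, so
// the measurement covers the whole pipeline.
bool FrameIsFastEnough(HDC hdc, double budgetMs)
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    RenderScene();
    glFinish();
    SwapBuffers(hdc);
    QueryPerformanceCounter(&stop);

    double elapsedMs =
        1000.0 * (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    return elapsedMs <= budgetMs; // e.g. budgetMs = 50.0 for ~20 fps
}
```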

[This message has been edited by Bob (edited 08-09-2002).]

OK, thanks for your reply. At least I have learned something. I didn’t realise that the question had been asked many times before.

Thanks for your time

Matthew

man, sorry for lying to you mcsellski. thanks for clearing up my deception bob.

You don’t have to apologize, SThomas, because you were right. You CAN use the renderer string to determine software rendering. As you say, if it says something with Microsoft, you have software rendering. That is true.

But the problem with your technique is that it only detects the problem at startup, not sudden software fallbacks during runtime.

ah, thanks bob, but i’m afraid that what you say is only an attempt at justifying my heinous lies. i’ve administered myself twenty cane lashings so that in the future i’ll better understand the usage of the glGetString() function and its associated parameters.