General OpenGL programming question

Ok,

Suppose I take some basic OpenGL code that compiles and runs. I run it on a PC with a basic video card, then swap in a much faster card. I’m assuming the code will run a lot faster, correct? I ask because I’ve picked up some OpenGL code that I need to maintain. The thing is, it’s running on a very fast PC with a top-end ASUS video card, and yet the CPU sits flat out at 100% whenever it displays video. I’m assuming it isn’t using the GPU on the card? Or will it automatically use the GPU once the drivers have been installed?

Many thanks in advance.

The driver decides what runs on the CPU and what runs on the GPU. If rendering isn’t accelerated by the GPU you’ll know it, because the framerate will be really poor. That’s when it’s time to install a new driver.
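One quick sanity check, in case it helps: once the context is current, query the renderer strings. On Windows, Microsoft’s unaccelerated software fallback identifies itself as “GDI Generic”, whereas a proper driver reports the vendor and card name. A minimal sketch (printRendererInfo is just a name I made up):

    #include <stdio.h>
    #include <windows.h>
    #include <GL/gl.h>

    /* Call this after the GL context has been created and made current. */
    void printRendererInfo(void)
    {
        /* "GDI Generic" under GL_RENDERER means Microsoft's software
           fallback, i.e. the GPU is not being used at all. */
        printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    }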

You know, every OpenGL app I’ve ever written on Windows pegs the CPU. I always figured it was just because Windows sucks, but there’s probably a more scientific reason for it.

You may still get 100% CPU usage (it’s actually the most probable scenario, unless the code is deliberately stalled with some sort of wait/delay). The CPU and the GPU work in parallel, and in a decently balanced system both the CPU and the GPU will be working at 100%.
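If you want SwapBuffers to pace the loop instead of spinning flat out, one common lever is vsync. A sketch, assuming the driver exposes the WGL_EXT_swap_control extension (and note that some drivers busy-wait during the swap anyway, so CPU usage can stay high even with vsync on):

    #include <windows.h>
    #include <GL/gl.h>

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    /* Enable vsync so SwapBuffers paces the loop to the monitor refresh.
       Requires a current GL context; link against opengl32.lib. */
    void enableVSync(void)
    {
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT)
            wglSwapIntervalEXT(1);  /* 1 = wait for one vertical retrace */
    }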

If you are updating your display as quickly as possible, which is usually what you should be doing when you have animations, the CPU will generally sit at or near 100% due to the continuous processing of WM_PAINT messages.
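For what it’s worth, the usual animation pump looks something like the sketch below. PeekMessage never blocks, so the thread renders (and burns CPU) whenever the message queue is empty. drawScene and the hdc parameter are hypothetical stand-ins for the app’s own drawing code and device context:

    #include <windows.h>

    void drawScene(void);  /* hypothetical: the app's GL drawing code */

    /* A typical "render when idle" message pump. Because PeekMessage
       returns immediately when the queue is empty, the thread spins at
       100% CPU rendering frames between messages. */
    int runMessageLoop(HDC hdc)
    {
        MSG msg;
        for (;;) {
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    return (int)msg.wParam;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            drawScene();       /* issue the frame's GL calls */
            SwapBuffers(hdc);  /* present; may also spin-wait in the driver */
        }
    }

If you want to give some time back to the OS, you can drop in a Sleep(1) after the swap, or rely on vsync, at the cost of a little latency.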

Thanks for all the info, it’s starting to make sense now. I figured the CPU shouldn’t be at 100% because the GPU would be doing the work!

Thanks again!