View Full Version : How do I know if a program is getting hardware acceleration?

08-26-2000, 08:05 AM
Is there any way to know if a program is getting hardware transformation and/or lighting in OpenGL? I mean in code, not by looking at the slow frame rate :)

08-26-2000, 09:10 AM
No, there is no general way to do it. But maybe you can query the name of the renderer (glGetString(GL_RENDERER)) or similar, and determine if it's a GeForce, or any other card/chip that supports HWT&L.

08-27-2000, 03:52 AM
Looking at the frame rate is the single best way of knowing what you want to know. But this thread has been discussed to death; search the archives.

08-27-2000, 03:54 AM
If you look at Bob's post, you can be fairly certain about the renderer.

If it says (for Windows, that is :) ) Microsoft Generic x.x.x (x's are version numbers), then it's not hardware. If it has the name of a video vendor (e.g. Nvidia, 3dfx, 3Dlabs, etc.) or "ICD" in the name, then it's hardware.


08-27-2000, 07:04 AM
Well, looking at the framerate is not that straightforward.

You have to know how fast a non-HWT&L system would run on the specific platform, and then compare with the actual result. So, on your system, you might get X fps w/HWT&L. And if you get the same framerate on system B, does that mean it has HWT&L? Maybe not; it can simply be a really fast platform. So then you have to take into account that the system can be a lot faster/slower, too.

A fast platform w/o HWT&L can very well beat a slower platform w/HWT&L.

And just looking for Nvidia (for example) as the renderer is not enough; you have to look for a specific chip supporting HWT&L (a GeForce, for example), 'cause a TNT doesn't support it :)

08-28-2000, 03:45 AM
Oh, geez. I'm sorry. I thought the original question was how to know if you're simply using hardware rendering, not hardware T&L.

There is absolutely no way to know if you're using hardware T&L, as it's native to the GPU and isn't exposed anywhere. Framerate may be the best indicator you can get of this, but it's hard to justify such a result, because you cannot "disable" hardware T&L on a chip, so you can't ever make an apples-to-apples comparison.

The only thing you can really do is stick to vendor-approved extensions for a particular video subsystem, use glRotate and glTranslate for everything, and pray that your code and the driver use the T&L system.

Just as a quick note, to reinforce the "you'll never really know" notion: remember the Savage2000? Well, it supposedly had hardware T&L, but it was disabled on the chip because it didn't work. Many people didn't know this at first when the product was released, as the card was indeed faster than the previous generation. It wasn't until S3 announced that the T&L functionality had been disabled that people knew.

That's why S3 pulled out of the graphics biz (finally!).


Hardware T&L... hmm, is that a subset of hardware S&M? :)