PDA

View Full Version : T&L problem using nVIDIAGeforce



boros
08-04-2000, 06:15 AM
I develop some OpenGL programs in my spare time. NVIDIA's homepage claims that only new games can take advantage of the T&L functions moved onto the graphics card. I thought that old programs would use this feature automatically when running on such a card and driver.
If that's not the case, what must I change in my old development setup (VC++, SGI OpenGL implementation from 1999) to run T&L on the GPU instead of the CPU?

I thought there were only two states: using hardware acceleration or not. Now NVIDIA says that the GeForce will provide normal hardware acceleration for older OpenGL applications, but without the T&L feature.

Is the problem with me (surely...), and am I misunderstanding something basic about OpenGL?

Please send me a few words, at least about that last question.....

Laszlo Boros

Bob
08-05-2000, 12:31 AM
To be able to use HW T&L, you must use OpenGL's own transformation/lighting commands (glRotate, glTranslate, and so on). When you call these functions, your driver automatically performs them in hardware, because that is how they are implemented. If you instead write your own transformation routines, there is no way for the driver to know that a certain function, or a certain piece of your program, is a transformation, so it will be executed in software, even on a GeForce or any other card with HW T&L capabilities. This applies to Direct3D as well, so it's not an OpenGL-only issue.

So, to answer your question about what you need to change in your old program: replace all transformations (and lighting too) with OpenGL functions, and let OpenGL do the work for you.

boros
08-07-2000, 11:09 PM
Thanks, Bob!
Your answer was the clearest for me. I first posted this question in the 'beginners' forum; there are many answers there too.

Laszlo Boros