PDA

View Full Version : T&L with nVIDIAGeforce



boros
08-04-2000, 06:07 AM
I develop some OpenGL programs in my spare time. The nVidia homepage claims that only new games can take advantage of the T&L functions moved onto the graphics card. I thought that old programs would automatically use this feature when running with such a card and driver.
If this doesn't work, what must I change in my old development setup (VC++, SGI OpenGL implementation from 1999) to do T&L on the GPU instead of the CPU?

I thought there were two states: using hardware acceleration or not. Now nVidia says that GeForce will provide normal hardware acceleration with older OpenGL applications, but without the T&L feature.

Is the problem with me (surely...), and am I misunderstanding something basic in OpenGL?

Please send me a few words, at least regarding the last question...

Laszlo Boros

ribblem
08-04-2000, 07:45 AM
What you're confusing is that T&L doesn't work with older versions of DirectX; T&L does work with OpenGL. However, if you want your program to work efficiently with nVidia's GPU, there are a lot of efficiency tricks you can use. Many of them can be found on nVidia's developer site: http://www.nvidia.com/Developer.nsf

Gorg
08-04-2000, 07:46 AM
My only guess is that old OpenGL games don't use glTranslate and glRotate to do their transformations. They implement their own fast routines (using 3DNow!, SIMD, etc.).

Pauly
08-04-2000, 02:43 PM
People don't seem to get hardware T&L at all...

If you're writing a game engine you need to know where all your objects are. Each object (all aligned the same way when created) has a matrix assigned to it that represents its rotation and its position.

You use 'proper' math functions to modify these - not OpenGL commands - so you can get at the results.

The results are useful for collision detection, selection, simulation in general...

Hardware T&L is still useful because you then use glMultMatrix (a function the graphics card accelerates) to have the card multiply all of an object's vertices by its matrix, putting them in the proper position.

Paul.

[This message has been edited by Pauly (edited 08-04-2000).]

08-04-2000, 07:11 PM
Your post seems confusing, Pauly... what are you saying? When you say "You use 'proper'", should that say "You should use"? Because I currently use glTranslatef() and glRotate()... so, in the end, do I get hardware acceleration or not with OpenGL?

Gorg
08-04-2000, 07:12 PM
Homer Jay : yes!!!

boros
08-07-2000, 11:05 PM
Thanks for the answers. I got an answer from nVidia's John Spitzer too (that's nice from a company!!!!). He says:

'OpenGL HW T&L should work automatically with any of the GeForce products
(including GeForce2 GTS). No additional work is necessary.'

"Actually, the glRotatef, glTranslatef, etc. commands require a decent amount
of CPU effort as well. They shouldn't be used indiscriminately, or your
performance will suffer. The vertex transformation (through both modelview
and projection matrices) is, of course, done on the GPU. Games like Quake3
Arena, though not specifically coded for "HW T&L" use OpenGL for 3D
transformation, and will utilize the GPU's transformation capabilities
anyway. Q3A does not, however, utilize OpenGL's lighting capabilities, and
thus that part is not HW accelerated, but done on the CPU. "

So the problem is cleared up now, no?
Laszlo Boros

[This message has been edited by boros (edited 08-08-2000).]

ribblem
08-08-2000, 06:22 AM
Yup, nVidia is a pretty good company when it comes to answering questions.

What Pauly was talking about is how, in games where graphics aren't the only important thing, you rotate and translate your objects (enemies and other things) NOT using GL calls. The reason to do this is so that you know where your enemies are in global coords. However, games also do a ton of modification of tex coords and that sort of thing, which will be helped out by the GPU. As long as you're using GL calls to transform your objects, you get the benefit of the GPU, just like that guy from nVidia said.