Look at this place: NVIDIA’s Developer Relations site.
You’ll find a lot of docs, demos, etc…
Check out the presentations and whitepapers - they’re very interesting and much easier to understand than the short, dry descriptions of the same OpenGL extensions.
As long as you have the proper driver, all you have to do is use OpenGL’s native functions, and everything that can be accelerated will be accelerated in hardware. As long as you don’t do your own lighting/transformation, that is.
Aren’t there some OpenGL extensions that you need to use? I hear that a developer has to “support” new features like T&L on the GeForce, so they must have to do something extra? It can’t all be automatic?
The way I think of it, nVidia has added some specialized hardware acceleration, and in order to use it you have to use some kind of OpenGL extension that they probably provide. Maybe I’m wrong - I hope I am, since that would mean it’s easier to code.
As Bob said, you don’t need to do anything special to get transform and lighting accelerated with a graphics card that supports it, like the GeForce.
You certainly do not need to use any extensions…
What might be confusing is the hype around a “T&L game” - it sometimes sounds like they have “enabled” it! But what they have really done is take advantage of the T&L feature by using more complex geometry… more polygons in the level…
I’ve been trying to build a small demo program using the multi-texturing units on a GeForce 2, but each time I get a linker error, ‘unresolved external _glMultiTexCoord2f’. Could someone give me some guidelines as to what I’m doing wrong…
Originally posted by srhadden1: I got the impression that you had to enable the T&L computation for it to work.
You were probably reading about a Direct3D game - D3D didn’t allow for T&L acceleration in its original pipeline model, so a D3D app DOES have to explicitly “enable” T&L.
Rob (and Sjonny too): Multitexturing functions are not found in any library file, since they are implementation-specific. You must load them yourself, using wglGetProcAddress(char *ext), where ext is the name of the function you want to load. It can seem a tad difficult, but it’s actually not that hard.
Here’s one way to go:
Go to nVidia’s site and get a header file with all the extensions (this works fine for all manufacturers).
Include the file and declare function pointers.
Assign each function pointer a value.
PhilipT: Yepp, Q3A (as well as any OpenGL-based game) will run (A LOT) faster with a GeForce. Why? Because Q3A calls native OpenGL functions, and all OpenGL functions are loaded (when you start Windows) from the driver you currently have installed. If your driver can perform function X in hardware, each call to X will be performed in hardware. If your driver does not support X in hardware, X will be run in software (since that is how X was written by the manufacturer).