NVIDIA VAR Question

Hello,

I am trying to use the NV_vertex_array_range (VAR) extension and compare its performance with conventional vertex arrays.
Guess what? The latter works faster on average by about 10 FPS.

My OpenGL driver is 1.4 on a GeForce 2 MX 400 with 2x AGP.
The rendered model is a highly detailed capsule created in Lightwave.

Is VAR a fake or what? Or should it only be used with a specific geometry configuration?

It all depends on how you use it… and on whether all your drivers (chipset/AGP) work as they should.

And I suggest using VBO instead… they are simpler to use, and both ATI and NVIDIA support them (if not more vendors)… Roughly like the sketch below.
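Something like this, just as a sketch: it assumes a loader such as GLEW exposes the GL 1.5 buffer-object entry points, and that verts/nVerts come from your own model loading code (those names are mine, not from your app).

/* Minimal VBO sketch: GL 1.5 / ARB_vertex_buffer_object style usage. */
#include <GL/glew.h>

static GLuint vbo = 0;

/* One-time setup: copy the static vertex data into a buffer object. */
void createVBO(const GLfloat *verts, GLsizei nVerts)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 nVerts * 3 * sizeof(GLfloat),   /* x,y,z per vertex */
                 verts,
                 GL_STATIC_DRAW);                /* hint: data never changes */
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

/* Per-frame draw: the same vertex-array calls as before, except the
 * pointer argument is now an offset into the bound buffer, not a CPU address. */
void drawVBO(GLsizei nVerts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, nVerts);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}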

Thanks.
Wow, but my card is outdated (GeForce 2 MX). Does VBO work if 1.5 drivers are installed?

nVidia seems to support VBO for TNT2 and up… you just have to get a later driver if it doesn't show up in your extension list.

The beauty of VBO is that it should “work” even if the hardware doesn't accelerate it: the worst performance you should get from them is the same as normal vertex arrays, and on hardware that can utilize them they will be faster. From a developer's point of view you just use them (no need for multiple code paths once more drivers have it). You can check your extension list with something like the snippet below.
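A rough check for whether your driver advertises VBO (the function name here is mine, just for illustration):

#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if the driver advertises VBO support. A strict check
 * would tokenize the extension string, but for a name like
 * GL_ARB_vertex_buffer_object a substring test is usually good enough. */
int hasVBO(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_vertex_buffer_object") != NULL;
}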

Back to VAR: first of all, make sure you're bandwidth limited, because that's the area where VAR helps. You need a decent number of vertices (many hundreds of thousands) in order for VAR to shine.

Second, make sure you're not making a simple mistake like specifying the range every frame, or rewriting the data every frame. I'm assuming your scene is 100% static. Also make sure you're using a correct priority when allocating the memory, that you allocate NV memory only once, that you do not enable/disable VAR every time, and that you do not store your indices in VAR memory (those should stay in system memory). See the sketch below.
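A correct static setup looks roughly like this. It's only a sketch: it assumes the NV entry points (wglAllocateMemoryNV, glVertexArrayRangeNV) were already fetched with wglGetProcAddress, and the function and variable names are illustrative, not from your code.

#include <string.h>   /* memcpy */

static GLfloat *varMem  = NULL;
static GLsizei  varSize = 0;

void setupVAR(const GLfloat *verts, GLsizei nVerts)
{
    varSize = nVerts * 3 * sizeof(GLfloat);

    /* Allocate AGP memory ONCE: read freq 0, write freq 0, priority ~0.5.
     * A priority near 1.0 asks for video memory instead. */
    varMem = (GLfloat *)wglAllocateMemoryNV(varSize, 0.0f, 0.0f, 0.5f);
    if (!varMem)
        return;   /* allocation failed: fall back to plain vertex arrays */

    /* Copy the static geometry ONCE; never rewrite it per frame. */
    memcpy(varMem, verts, varSize);

    /* Declare the range ONCE and leave VAR enabled for the whole run. */
    glVertexArrayRangeNV(varSize, varMem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, varMem);
}

/* Per frame: just draw. The indices stay in ordinary system memory. */
void drawVAR(const GLushort *indices, GLsizei nIndices)
{
    glDrawElements(GL_TRIANGLES, nIndices, GL_UNSIGNED_SHORT, indices);
}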

Y.