GL_ARB_vertex_buffer_object NVIDIA performance

Doing a lot of googling this morning, I found many people dealing with surprising performance results using the vertex buffer object extension on NVIDIA cards (both the older GeForce3/4 and the new GeForceFX).

From what I saw, it seems the best data types to use are float for everything except indices (which should be unsigned short) and colors (which should be unsigned byte). Does this seem correct to everyone?
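To make that concrete, here's roughly the layout I have in mind (just a sketch; the struct and function names are made up for illustration):

#include <stddef.h>
#include <GL/gl.h>

/* Hypothetical interleaved layout using the types above: floats for
 * position/normal/texcoords, unsigned bytes for color. */
typedef struct {
    GLfloat pos[3];
    GLfloat normal[3];
    GLfloat texcoord[2];
    GLubyte color[4];
} Vertex;                               /* 36 bytes per vertex */

/* With a VBO bound, the "pointer" arguments are byte offsets into
 * the buffer rather than real pointers. */
static void set_vertex_format(void)
{
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                    (const GLvoid *)offsetof(Vertex, pos));
    glNormalPointer(GL_FLOAT, sizeof(Vertex),
                    (const GLvoid *)offsetof(Vertex, normal));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                      (const GLvoid *)offsetof(Vertex, texcoord));
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex),
                   (const GLvoid *)offsetof(Vertex, color));
    /* Indices stay 16-bit: pass GL_UNSIGNED_SHORT to glDrawElements. */
}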

A lot of people were hoping that new drivers would fix some of these performance problems. I just want to get the word out that one VBO demo program I tried was getting 9 fps on my machine (1.3 GHz Athlon + GeForce4 Ti 4600) with the most recent 45.23 NVIDIA drivers. I upgraded to the beta drivers (52.10) from the NVIDIA developer site, and the demo's performance improved to 22 fps. Seems like NVIDIA has cracked the whip on their driver writers since the initial vertex buffer implementation =)

Anyone have anything to add to this?

Here is a link to the demo in question:
http://www.delphi3d.net/forums/viewtopic.php?t=154

trey


After the 44.03 drivers I have NO T&L!!!
I get the same performance using vertex arrays with the Microsoft OpenGL implementation as I do with VBOs under the NVIDIA drivers.

I hope they will fix it soon.

>> From what I saw, it seems the best data types to use are float for everything except indices (which should be unsigned short) and colors (which should be unsigned byte). Does this seem correct to everyone?

That is what the extension spec recommends that people use.

>> I get the same performance using vertex arrays with the Microsoft OpenGL implementation as I do with VBOs under the NVIDIA drivers.

Now that's not possible. The MS software GL implementation can't render data as fast as even immediate mode on any hardware-accelerated driver. That's simply because the MS driver is highly unoptimized and slow.

Now, if you're saying you get the same performance with VBOs as you do with regular vertex arrays, that's possible if you're not using an optimized format (i.e., non-float data).

I use 3 floats for XYZ. It ran very nicely on the 44.03 drivers: 30 fps with VA, 60 with VBO. But with 45.23 I get 18 fps for both VA and VBO, and 17-18 fps with VA on the standard Microsoft driver (no VBO, of course). It's probably not totally software, of course.

It's a 1 MB VBO stored with the static usage hint.
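For reference, the upload is nothing fancy, roughly this (a sketch; the names are placeholders, and on Windows the ARB entry points have to be fetched with wglGetProcAddress first):

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_ARB_vertex_buffer_object tokens and types */

/* Sketch of a static-hint VBO upload like the one described above.
 * The glGenBuffersARB/glBindBufferARB/glBufferDataARB entry points
 * must be obtained through wglGetProcAddress on Windows. */
static GLuint make_static_vbo(const GLvoid *data, GLsizeiptrARB bytes)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    /* GL_STATIC_DRAW_ARB: specify the data once, draw it many times. */
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, data, GL_STATIC_DRAW_ARB);
    return vbo;
}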

Some time ago I found that using VBOs with vertex lighting (glLight etc.) resulted in lower performance than immediate mode (45.23 drivers), while VBO with lighting off was faster (as expected).
(Details are in another post on this forum.)

Maybe it's the same bug? Do you have any OpenGL lights active? Try disabling lighting (glDisable(GL_LIGHTING)); does VBO performance jump back up?
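In other words, a quick A/B test around the draw call, something like this (draw_scene() is just a placeholder for whatever your app's draw path is):

#include <GL/gl.h>

extern void draw_scene(void);   /* placeholder for the app's VBO draw path */

/* Hypothetical A/B test: render the same VBO scene with lighting
 * on and off and compare the frame rates between the two runs. */
static void lighting_ab_test(void)
{
    glEnable(GL_LIGHTING);
    draw_scene();               /* suspected slow case */

    glDisable(GL_LIGHTING);
    draw_scene();               /* compare: does VBO speed come back? */
}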

>> Seems like NVIDIA has cracked the whip on their driver writers since the initial vertex buffer implementation.

This is described in the release info for Det50…

>> Maybe it's the same bug? Do you have any OpenGL lights active? Try disabling lighting (glDisable(GL_LIGHTING)); does VBO performance jump back up?

No, I'm not using any lights at all.

I hope Det50 comes soon.