
View Full Version : Question: VAR on GeForce3



foollove
04-08-2003, 04:25 PM
Hi everyone,
I've run into a problem with VAR. My triangle count is up to 0.5 million; can I still allocate AGP memory correctly at that size? The vertex data is also dynamic. Are there good ways to manage it, or can you offer any suggestions?

Cass recommended that I use the new ARB extension "vertex_buffer_object", but it is not accessible even after updating from OpenGL 1.2 to OpenGL 1.4.
My card is GeForce3.
Thanks a lot.

Korval
04-08-2003, 05:10 PM
You would need Beta drivers to gain access to the extension. It's new.

foollove
04-08-2003, 05:52 PM
Hi, can you provide a link? I am not sure which one I need.
Thanks

Originally posted by Korval:
You would need Beta drivers to gain access to the extension. It's new.

SThomas
04-08-2003, 06:03 PM
the vbo extension is available in nvidia's latest windows public release drivers (43.45), which you can grab (for win xp/2k) here (http://www.nvidia.com/view.asp?IO=winxp-2k_43.45) . you won't find the extension string, but you can initialize the entry points with wglGetProcAddress() no problem. the extension string will be added once nvidia's done with their testing.

if you have dynamic vertex data, you should specify either STREAM_DRAW_ARB or DYNAMIC_DRAW_ARB (which one to use depends on the app) as the last parameter to glBufferDataARB().
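as a rough sketch of that choice (the enum values are the ones from the ARB_vertex_buffer_object spec; the update-frequency thresholds are this sketch's own assumption):

```c
/* Sketch: choosing a glBufferDataARB usage hint from how often the app
 * respecifies the data. Enum values are from the ARB spec; the
 * classification by updates_per_frame is an illustrative assumption. */
#define GL_STREAM_DRAW_ARB   0x88E0
#define GL_STATIC_DRAW_ARB   0x88E4
#define GL_DYNAMIC_DRAW_ARB  0x88E8

unsigned pick_usage_hint(double updates_per_frame)
{
    if (updates_per_frame >= 1.0)   /* respecified every frame, drawn once */
        return GL_STREAM_DRAW_ARB;
    if (updates_per_frame > 0.0)    /* changed now and then, drawn many times */
        return GL_DYNAMIC_DRAW_ARB;
    return GL_STATIC_DRAW_ARB;      /* specified once, never touched again */
}
```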

is everyone else getting good vbo performance on nvidia cards? i'm not getting anything better than standard vertex arrays. i'm using the 43.45 drivers on a geforce 4 4400. i'm specifying STATIC_DRAW_ARB for the usage parameter, my vertices are 3 floats for the location, 3 floats for the normal, and 2 floats for a tex coord (only one tex coord). i'm using a buffer object for the indices also.
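for reference, the interleaved layout described above works out to a 32-byte stride; a sketch of the struct and the offsets you'd hand to the gl*Pointer calls:

```c
/* Sketch: interleaved vertex layout -- 3 floats position, 3 floats
 * normal, 2 floats texcoord -- and the stride/offsets for
 * glVertexPointer/glNormalPointer/glTexCoordPointer. */
#include <stddef.h>

struct Vertex {
    float pos[3];       /* bytes  0..11 */
    float normal[3];    /* bytes 12..23 */
    float texcoord[2];  /* bytes 24..31 */
};

enum {
    STRIDE     = sizeof(struct Vertex),            /* 32 bytes */
    POS_OFF    = offsetof(struct Vertex, pos),     /*  0 */
    NORMAL_OFF = offsetof(struct Vertex, normal),  /* 12 */
    TEX_OFF    = offsetof(struct Vertex, texcoord) /* 24 */
};
```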

-steve.

Korval
04-08-2003, 07:10 PM
Considering that the drivers don't actually advertise the extension as working, be glad it functions at all ;)

More seriously, it has been widely publicized on these forums that implementations have not been fully optimized yet. Considering that the spec was released less than a month ago, that's hardly surprising.

foollove
04-08-2003, 10:11 PM
Thank you all.
I will try VBO now.

Zengar
04-09-2003, 02:50 AM
On my GeForce3 Ti200, VBO gives about a 100% performance improvement (400 fps) over standard vertex arrays (210 fps). I render about 300,000 faces.
As far as I can tell, it seems to be as fast as VAR (about 13 million tris/sec).
Maybe that's because I use only vertex positions and texture coordinates.

BTW, does anyone know of a good tutorial on vertex streaming? I mean something serious, with real research behind it.

fritzlang
04-09-2003, 02:11 PM
Originally posted by SThomas:

is everyone else getting good vbo performance on nvidia cards? i'm not getting anything better than standard vertex arrays. i'm using the 43.45 drivers on a geforce 4 4400. i'm specifying STATIC_DRAW_ARB for the usage parameter, my vertices are 3 floats for the location, 3 floats for the normal, and 2 floats for a tex coord (only one tex coord). i'm using a buffer object for the indices also.
-steve.

Steve!
I have the exact same problems (Geforce3)!
It put me in such a bad mood that I had to go back to using VAR again.
What are we doing wrong?
I would love to get it working; if someone can help, I would be most grateful.

Cheers.

pkaler
04-09-2003, 03:00 PM
Code up support for VBO with VAR/VAO and malloc fallbacks, and pick whichever is fastest at runtime. VBO should be the fastest once the drivers mature.
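A sketch of that runtime selection, assuming the availability flags and throughput numbers come from extension checks and a startup benchmark (pick_path() and its inputs are hypothetical, not any particular engine's API):

```c
/* Sketch: implement all three submission paths and pick the fastest
 * available one at startup. The malloc path is always available, so it
 * is the baseline. */
typedef enum { PATH_VBO, PATH_VAR, PATH_MALLOC } draw_path;

draw_path pick_path(int have_vbo, int have_var,
                    double vbo_tris_per_sec,
                    double var_tris_per_sec,
                    double malloc_tris_per_sec)
{
    draw_path best = PATH_MALLOC;
    double best_rate = malloc_tris_per_sec;

    if (have_var && var_tris_per_sec > best_rate) {
        best = PATH_VAR;
        best_rate = var_tris_per_sec;
    }
    if (have_vbo && vbo_tris_per_sec > best_rate)
        best = PATH_VBO;

    return best;
}
```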

fritzlang
04-09-2003, 11:41 PM
Originally posted by PK:
Code up support for VBO with VAR/VAO and malloc fallbacks, and pick whichever is fastest at runtime. VBO should be the fastest once the drivers mature.

Yes, I agree, and that is what I've done.
Nvidia mentioned earlier that the current VBO implementation is for functionality rather than speed, which makes sense. But people are obviously getting near-VAR performance already with current drivers. For me, VAR increases performance dramatically; how come I see no difference at all with VBO? No one seems to be able to answer why. My guess is that with current drivers, behind the scenes, VBO behaves exactly like unextended vertex arrays.

Cheers.

pkaler
04-10-2003, 07:43 AM
Originally posted by fritzlang:
For me, VAR increases performance dramatically; how come I see no difference at all with VBO? No one seems to be able to answer why.

From this thread: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009196.html


Originally posted by cass:

Just to clarify, VBO was designed to be able to move/keep arrays in system memory when it could recognize that as the optimal location. It will probably take some time for drivers to be able to make this determination well, as the solution to this problem is non-trivial.

fritzlang
04-10-2003, 11:40 AM
Thanks PK.
I did read that post, without directly relating it to my problem.
New working drivers == really something to look fwd to then. ;|

Cheers.

[This message has been edited by fritzlang (edited 04-10-2003).]

foollove
04-11-2003, 05:51 AM
I have installed the 43.45 driver.
But now I find I cannot use VBO.
When I call wglGetProcAddress() to look up the VBO functions, they all return NULL.
Can someone offer any suggestions?
I am going crazy now :(