NV_vertex_program extension

Hi all

I've got a GeForce2 GTS and the new Detonator 12.60 Windows 2000 drivers. I wrote a vertex program (in OpenGL) that interpolates between two vertex positions according to a parameter t (used for keyframe-based avatar animation)… and it ran DEAD SLOW! I then cut out as much code as possible to see where the slowdown was, and this is where I got to.

My vertex program is very simple …

char VPmesh_deformation[] =
"!!VP1.0 # This is the vertex program.\n"

// transform vertices to homogeneous clip space
"DP4 o[HPOS].x, c[0], v[OPOS];"
"DP4 o[HPOS].y, c[1], v[OPOS];"
"DP4 o[HPOS].z, c[2], v[OPOS];"
"DP4 o[HPOS].w, c[3], v[OPOS];"

// pass texture coords through
"MOV o[TEX0], v[TEX0];"

// set color to (1.0, 1.0, 1.0, 1.0) held in c[5]
"MOV o[COL0], c[5];"

"END";

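For reference, the one-time program setup (not shown here) looks roughly like this — a sketch assuming c[0]–c[3] are set to track the modelview-projection matrix and c[5] is loaded with the constant colour, with no error checking:

GLuint vpid;
glGenProgramsNV(1, &vpid);
glLoadProgramNV(GL_VERTEX_PROGRAM_NV, vpid,
                (GLsizei)strlen(VPmesh_deformation),
                (const GLubyte *)VPmesh_deformation);

// have c[0]..c[3] track the concatenated modelview-projection matrix
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);

// constant colour read by the "MOV o[COL0], c[5]" instruction
glProgramParameter4fNV(GL_VERTEX_PROGRAM_NV, 5, 1.0f, 1.0f, 1.0f, 1.0f);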

And it's called every frame by:

glBindProgramNV(GL_VERTEX_PROGRAM_NV, vpid);
glEnable(GL_VERTEX_PROGRAM_NV);

glVertexAttribPointerNV( 0, 3, GL_FLOAT, 0, vertices );

glEnableClientState( GL_VERTEX_ATTRIB_ARRAY0_NV );
glDrawElements( GL_TRIANGLES, numElements, GL_UNSIGNED_INT, indexes);

glDisableClientState( GL_VERTEX_ATTRIB_ARRAY0_NV );

glDisable(GL_VERTEX_PROGRAM_NV);

Enabling the vertex program cuts the fps in half!

For example, rendering normally using element arrays I get 100 fps; with the vertex program enabled (even though it's hardly doing any work) that drops to 50 fps.

I can only think it's something to do with sending the vertices (e.g. glVertexAttribPointerNV( 0, 3, GL_FLOAT, 0, vertices )), or maybe the vertex program extension isn't optimised for my graphics card?

Anyone else doing anything similar, or anyone got an idea of why it’s so damn slow?

Thnx

Lloyd

Charismatic Project, UEA Norwich

How many vertices are in the model you're drawing? The vertex program is done completely in software, so you lose the hardware T&L… and that's slow on PCs like mine (Pentium III 500 MHz), so I won't use it until I have a better PC, a GF3 or an nForce board.

1 - Each avatar has about 3500 triangles, and the test scene I'm using has 5 avatars.

2 - I'm using a GeForce2, so surely the vertex program is being done in hardware?

No, only GeForce3 hardware is capable of doing vertex programs. GeForce2 or less will emulate them in software.

Oh, and, because of this, don’t use VAR with vertex programs on GeForce2, since the driver has to do the T&L on the CPU.

[This message has been edited by Korval (edited 06-08-2001).]

ah nice one m8ty

I thought the GeForce2 supported vertex programs in hardware. Oh well, I just need to get the boss to buy me a GeForce3.

By the way, what do you mean by VAR? I've heard it mentioned many times, I'm just not sure what it is.

thnx

Lloyd

NV_vertex_array_range. It is an nVidia extension used to send vertex data asynchronously (and efficiently) to the graphics chip. It even has functionality to allocate AGP or video memory for vertex data. Because CPU read access to AGP and video memory is excruciatingly slow (the memory is uncached, or reads go across a fairly narrow bus), vertex program code on a GeForce2 should not use VAR.
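For reference, a typical VAR setup looks roughly like this. This is only a sketch: vertexBytes is a placeholder for your buffer size, the read/write/priority hints passed to the allocator are illustrative, and the extension entry points are assumed to have been fetched with wglGetProcAddress:

// allocate write-combined AGP-style memory for the vertex data
void *varMem = wglAllocateMemoryNV(vertexBytes, 0.0f, 0.0f, 0.5f);

// hand the range to the driver and turn VAR on
glVertexArrayRangeNV(vertexBytes, varMem);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

// copy your vertices into varMem, then point the usual arrays at it and draw
glVertexPointer(3, GL_FLOAT, 0, varMem);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawElements(GL_TRIANGLES, numElements, GL_UNSIGNED_INT, indexes);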

I have a regular GeForce DDR and I have been messing with vertex programs to learn them. I'm using plain ol' vertex arrays (glEnableClientState( GL_VERTEX_ARRAY )) and they seem to work fine. Now, my geometry is not as complex as what was stated in this thread. So, since VAR should not be used on a GeForce 1 or 2 with NV_vertex_program, is the way I'm using vertex arrays the 'right thing to do'? To be clear, what I'm doing is basically this (names are just placeholders):
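glBindProgramNV(GL_VERTEX_PROGRAM_NV, vpid);
glEnable(GL_VERTEX_PROGRAM_NV);

// plain system-memory vertex array; on nVidia hardware attribute 0 (v[OPOS])
// aliases the conventional position array, so this feeds the program
glVertexPointer(3, GL_FLOAT, 0, vertices);
glEnableClientState(GL_VERTEX_ARRAY);

glDrawElements(GL_TRIANGLES, numElements, GL_UNSIGNED_INT, indexes);

glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_VERTEX_PROGRAM_NV);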

-SirKnight