
View Full Version : Static Model



guju
06-12-2002, 11:42 PM
I have a static model with about 500,000 vertices. What is the best way to get maximum performance? I do not want to use the NV extensions, because I want to be independent of NV cards. I have tried display lists and glDrawRangeElements. Should I split up my vertex arrays or my display list? Please help me!
Juergen

davepermen
06-13-2002, 12:55 AM
use NV extensions if supported, else stick with normal vertex arrays/display lists (but I guess display lists are not that useful here, because they could get quite large..)

to get optimal performance and features, you have to code for different GPUs.. too bad, isn't it?

guju
06-13-2002, 12:59 AM
I do not want to depend on the NV extensions or code for different GPUs. The application should run on all platforms and graphics cards. I tried display lists, but I cannot change the color of individual vertices.

davepermen
06-13-2002, 02:29 AM
use glDrawArrays and that's it..
well.. switching to VAR is quite easy later: just one function pointer to allocate memory on the GeForce, use that instead of local memory, and keep the same vertex arrays.. what's the problem?
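A minimal sketch of the interleaved vertex-array layout davepermen is describing. The `Vertex` struct and its field names are my own illustration, not anything from the thread; the GL calls themselves need a live context, so they are shown only in comments:

```c
#include <stddef.h>

/* Hypothetical interleaved vertex: a position plus a per-vertex color --
 * the per-vertex color is exactly what a compiled display list would not
 * let guju change afterwards. */
typedef struct {
    float         pos[3];   /* -> glVertexPointer(3, GL_FLOAT, ...)        */
    unsigned char color[4]; /* -> glColorPointer(4, GL_UNSIGNED_BYTE, ...) */
} Vertex;

/* With an interleaved array, both pointers share one stride:
 *
 *   glEnableClientState(GL_VERTEX_ARRAY);
 *   glEnableClientState(GL_COLOR_ARRAY);
 *   glVertexPointer(3, GL_FLOAT,         sizeof(Vertex), &verts[0].pos);
 *   glColorPointer (4, GL_UNSIGNED_BYTE, sizeof(Vertex), &verts[0].color);
 *   glDrawArrays(GL_TRIANGLES, 0, count);
 *
 * Switching to NV_vertex_array_range later only changes where `verts`
 * is allocated (wglAllocateMemoryNV instead of malloc); the pointer
 * setup and the draw call stay the same. */
```

Colors can then be rewritten in place between frames, which is the limitation guju hit with display lists.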

guju
06-13-2002, 02:32 AM
I thought that glDrawElements is faster than glDrawArrays?

A027298
06-14-2002, 10:59 AM
Aren't 500,000 vertices quite a lot? Which extension do you mean? Wouldn't it make more sense to implement, for instance, an algorithm that determines only the visible vertices and then renders those?
~tOmUsA

guju
06-14-2002, 11:06 AM
I don't want to use NV extensions. In my case, most of the time all the vertices are visible. What is the best algorithm for detecting the visible vertices?
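Nobody in the thread names a concrete algorithm, but the usual answer is frustum culling of whole chunks of the model rather than per-vertex tests. A minimal sketch of the core check, a bounding sphere against the six frustum planes (the plane representation and function names are my own assumptions, not from the thread):

```c
/* A plane in the form ax + by + cz + d = 0, with (a,b,c) unit length
 * and pointing into the frustum. */
typedef struct { float a, b, c, d; } Plane;

/* Signed distance from a point to the plane; negative means "behind". */
static float plane_distance(const Plane *p, float x, float y, float z)
{
    return p->a * x + p->b * y + p->c * z + p->d;
}

/* A chunk of the model, bounded by a sphere, can be skipped entirely if
 * its sphere lies fully behind any one of the six frustum planes. */
int sphere_visible(const Plane planes[6],
                   float cx, float cy, float cz, float radius)
{
    int i;
    for (i = 0; i < 6; ++i)
        if (plane_distance(&planes[i], cx, cy, cz) < -radius)
            return 0; /* completely outside this plane: cull the chunk */
    return 1;
}
```

Note this only pays off if the model is split into chunks with precomputed bounding spheres; when everything is on screen anyway, as guju says, culling saves nothing.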

Korval
06-14-2002, 11:58 AM
Even if you used VAR, I doubt you'd get great performance. You might get 30fps or so.

guju
06-14-2002, 12:09 PM
You mean it is possible to get 30 fps with 500,000 vertices even if I display only lines?? Then I must be making a lot of mistakes in my programming.

jwatte
06-14-2002, 01:04 PM
Lines are much slower than triangles on most graphics cards (except possibly very expensive workstation cards such as the Wildcats).

What people are trying to tell you, and you're not hearing, is that once you've made sure that you submit vertex data in an optimal format (floats or shorts) using DrawRangeElements, there's not a whole lot you can do to get better throughput without tuning for a particular card.
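As a sketch of the DrawRangeElements path jwatte mentions: the call wants the smallest and largest vertex index actually referenced, so the driver can transfer or lock just that slice of the array. The helper below is my own illustration in plain C; the draw call itself needs a GL context, so it is shown in a comment:

```c
/* Compute the [start, end] index range that
 * glDrawRangeElements(mode, start, end, count, type, indices) expects. */
void index_range(const unsigned int *indices, int count,
                 unsigned int *start, unsigned int *end)
{
    int i;
    *start = *end = indices[0];
    for (i = 1; i < count; ++i) {
        if (indices[i] < *start) *start = indices[i];
        if (indices[i] > *end)   *end   = indices[i];
    }
}

/* With the range in hand, the draw call would look like:
 *
 *   glDrawRangeElements(GL_TRIANGLES, start, end, count,
 *                       GL_UNSIGNED_INT, indices);
 *
 * For a static model, compute the range once at load time, not per frame. */
```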

The point of extensions is that you use them if they're there, and you fall back to the non-extension case when they're not there. Thus, you can test for NV_VAR, and if it's there, use that to allocate your vertex array and enable VertexArrayRange, else use malloc(). Very simple, makes nVIDIA cards run faster, and doesn't change the speed or compatibility on other cards.
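jwatte's check-then-fall-back pattern can be sketched like this. At runtime the extension string comes from glGetString(GL_EXTENSIONS); the matcher below works on any space-separated string, so it is shown as plain C (the function name is my own):

```c
#include <string.h>

/* Return 1 if `name` appears as a whole word in the space-separated
 * `extensions` string (e.g. the result of glGetString(GL_EXTENSIONS)).
 * A bare strstr() is not enough: it would also match a prefix of a
 * longer extension name. */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions) || (p[-1] == ' ');
        int ends   = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* Usage, roughly as jwatte describes (the VAR entry points are fetched
 * via wglGetProcAddress/glXGetProcAddress, not shown here):
 *
 *   if (has_extension((const char *)glGetString(GL_EXTENSIONS),
 *                     "GL_NV_vertex_array_range"))
 *       verts = wglAllocateMemoryNV(size, 0.0f, 0.0f, 0.5f);
 *   else
 *       verts = malloc(size);
 *
 * Either way, the same vertex-array setup and draw calls work on top. */
```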