Vertex Array question - double indexing

I want to use vertex arrays in my rendering for performance reasons, but I don’t see how to do exactly what I need. All my data is static, and right now I just build a display list. I have a large number of vertices (from 20k to 400k), most of the faces are flat, and most faces are coplanar with other faces. So I have lots of points but few normals. I don’t want to send the same normal over and over, so I only issue a glNormal when the normal actually changes. Most of the display list consists of glVertex calls.

It looks like if I used vertex arrays, I would have to store the same normal with every coplanar vertex, since every enabled array has to supply a value for every vertex.

Is there some way to select which components to send for each vertex, and thus only send a normal when the normal changes? (Hope this makes sense.)
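
Roughly what my display list build looks like now, for reference (simplified - the Mesh structure and names here are made up for the example, not my actual code):

#include <GL/gl.h>

typedef struct {
    int     numFaces;
    float (*verts)[3];          /* unique positions */
    float (*normals)[3];        /* unique, shared normals */
    int   (*faceVertIndex)[3];  /* 3 vertex indices per face */
    int    *faceNormalIndex;    /* 1 normal index per face */
} Mesh;

void buildList(GLuint list, const Mesh *mesh)
{
    int f, v, lastNormal = -1;
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    for (f = 0; f < mesh->numFaces; f++) {
        /* only issue glNormal when the face normal changes */
        if (mesh->faceNormalIndex[f] != lastNormal) {
            lastNormal = mesh->faceNormalIndex[f];
            glNormal3fv(mesh->normals[lastNormal]);
        }
        for (v = 0; v < 3; v++)
            glVertex3fv(mesh->verts[mesh->faceVertIndex[f][v]]);
    }
    glEnd();
    glEndList();
}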

Greg

Nope, you’ll have to duplicate the normal for each vertex.

You MIGHT be able to do something clever using shader programs… Assuming the normals belong to coherent runs of vertices, you could specify 4D vertices, use the 4th component to select a normal (though be careful when multiplying by your matrices!), and set the actual normals as shader constants.
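
Something like this, as a rough sketch (untested - GLSL shown for readability, though you could write the same thing as an ARB vertex program; the array size of 8 and the fixed light direction are just placeholders):

static const char *normalSelectVS =
    "uniform vec3 normals[8];\n"
    "void main() {\n"
    "    /* the vertex w component picks one of the constant normals */\n"
    "    vec3 n = normalize(gl_NormalMatrix * normals[int(gl_Vertex.w)]);\n"
    "    /* restore w = 1.0 before transforming the position */\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);\n"
    "    /* simple diffuse shading with a fixed light direction */\n"
    "    gl_FrontColor = vec4(vec3(max(dot(n, vec3(0.0, 0.0, 1.0)), 0.0)), 1.0);\n"
    "}\n";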

Along the same lines, you could use a normal map, and do a tex-lookup in a fragment shader.
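
The normal map version is equally rough (this assumes the normals are packed into [0,1] in the texture):

static const char *normalMapFS =
    "uniform sampler2D normalMap;\n"
    "void main() {\n"
    "    /* unpack the normal from the texture */\n"
    "    vec3 n = normalize(texture2D(normalMap, gl_TexCoord[0].st).xyz * 2.0 - 1.0);\n"
    "    /* simple diffuse shading with a fixed light direction */\n"
    "    gl_FragColor = vec4(vec3(max(dot(n, vec3(0.0, 0.0, 1.0)), 0.0)), 1.0);\n"
    "}\n";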

Brian

Originally posted by Humus:
Nope, you’ll have to duplicate the normal for each vertex.

What about not using normal arrays at all - batching the faces with like normals and issuing a single glNormal3f() before drawing each array?

It sounds like gstrickler has already done the hard work of lumping like normals together, so it should be relatively easy to implement. If you have a number of normals that are not duplicated, you could use a normal array just for those (i.e. the leftovers).
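
Something like this sketch (the batch layout and names are made up, but the idea should be clear - one glNormal3f per batch, and only the vertex array enabled):

#include <GL/gl.h>

void drawBatched(int numBatches,
                 const float (*verts)[3],    /* shared position pool */
                 const float (*normals)[3],  /* one normal per batch */
                 const int *batchTriCount,   /* triangles per batch  */
                 const GLuint *const *batchIndices)
{
    int b;
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    for (b = 0; b < numBatches; b++) {
        glNormal3fv(normals[b]);  /* single normal for the whole batch */
        glDrawElements(GL_TRIANGLES, batchTriCount[b] * 3,
                       GL_UNSIGNED_INT, batchIndices[b]);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
}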


Thanks for the replies.

Just for your info - my “domain” is real-time visualization of spacecraft in orbit. I have a home-grown app for modeling that addresses our need for articulated parts that respond to simulated spacecraft systems (among other things). But some of the users are importing CAD models. The biggest model now has over 480K triangles - it’s the space station. The imported CAD files don’t bring texture coordinates or color, so the users are colorizing manually and only applying texture where needed - like solar panels. But there are lots and lots of small parts, and the CAD guys really like the detail. The modeling tool batches the data down to about a dozen objects based on color and texture.

It’s a balancing act - which is better, more data through the pipeline or more state changes? Right now I build display lists for the model parts and use my own double-indexed data to send glNormal only when the normal changes.

I use LOD logic to determine when to make large-scale changes, but if I apply LOD to all the model parts (all those little handles the astronauts use to hold on with) I’ll have many hundreds of objects again, and that works against the batching!

I could batch the data based on normal direction - all the surfaces that face up drawn together - and that may be quite a few batches. But is that better than just doing what I’m doing now? I don’t know. I keep hoping one day there will be an easy answer to a hard question - but I’m not counting on it.

Oh, by the way, I’m using projective shadow casting on the model too, so I have to render it twice. But at least the shadow render pass is smart enough to eliminate all the normals.
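
(If I do switch to vertex arrays, the shadow pass should get that for free - just never enable the normal array. A sketch, with made-up names:)

#include <GL/gl.h>

void drawShadowPass(int numTris, const float (*verts)[3],
                    const GLuint *indices)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    /* GL_NORMAL_ARRAY is deliberately left disabled - no normals sent */
    glDrawElements(GL_TRIANGLES, numTris * 3, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}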

Thanks