Vertex Array Range....Why??

I’ve been doing some thinking, and I think that it is a bad idea to tie an OS-specific extension into the primary function of OpenGL. I’m not sure why this extension is a wgl extension.

What’s more, I don’t understand why a vertex object extension wasn’t issued instead. One akin to the texture object extension would give a pretty good performance boost and would be a better fit for the API.

Don’t get me wrong, I think that you can do some cool stuff with VAR, but it is at a weird level of abstraction. I had suggested doing something similar with texture memory in order to get Unreal to run faster, but was quickly shot down.

So I’m interested in people’s thoughts…

Keith

The basic premise of VAR is to get around the driver having to copy your vertex data out of your arrays and into its own structures, especially at the glDrawElements stage. It isn’t a wgl extension, because the core of it is an nVidia extension (NV_vertex_array_range); GeForce-level hardware also runs on Linux boxes, which do not have wgl functions (they use glX, I believe). Part of it is exposed through wgl (and glX), but that part only covers the memory allocation routines, which are very OS-specific (like, for instance, allocating uncached AGP and video memory).
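For reference, a minimal sketch of what that setup side looks like on Windows, assuming the entry points are fetched with wglGetProcAddress (on Linux you would use glXAllocateMemoryNV instead). The helper name setup_var, the buffer size, and the read/write/priority hints are illustrative; the priority value follows the commonly cited guidance that a mid-range priority requests AGP memory, so check the extension spec for your case.

```c
#include <windows.h>
#include <GL/gl.h>

#define GL_VERTEX_ARRAY_RANGE_NV 0x851D

typedef void * (APIENTRY *PFNWGLALLOCATEMEMORYNVPROC)(GLsizei size, GLfloat readFreq,
                                                      GLfloat writeFreq, GLfloat priority);
typedef void   (APIENTRY *PFNGLVERTEXARRAYRANGENVPROC)(GLsizei length, const GLvoid *pointer);

static PFNWGLALLOCATEMEMORYNVPROC  wglAllocateMemoryNV;
static PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV;

static GLfloat *var_mem;                     /* uncached AGP (or video) memory */
enum { VAR_BYTES = 1024 * 1024 };

static void setup_var(void)
{
    wglAllocateMemoryNV  = (PFNWGLALLOCATEMEMORYNVPROC)  wglGetProcAddress("wglAllocateMemoryNV");
    glVertexArrayRangeNV = (PFNGLVERTEXARRAYRANGENVPROC) wglGetProcAddress("glVertexArrayRangeNV");

    /* readFreq 0, writeFreq 0, priority 0.5 is the commonly cited recipe for AGP
     * memory; a priority near 1.0 asks for video memory instead (see the spec). */
    var_mem = (GLfloat *) wglAllocateMemoryNV(VAR_BYTES, 0.0f, 0.0f, 0.5f);

    /* Tell the driver which range the DMA engine may pull vertices from,
     * then enable the range. */
    glVertexArrayRangeNV(VAR_BYTES, var_mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
}
```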

The main reason, I would guess, that it wasn’t implemented as vertex objects is so that existing code can be quickly translated to VAR usage. Notice that, aside from the general setup stuff, VARs are used almost exactly like regular vertex arrays. A vertex object extension would force developers to use yet another completely different interface for sending vertex data (and there are already three ways to send vertex data; isn’t that enough?).
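To illustrate that point, a minimal sketch of the draw path, assuming var_mem is the pointer from a wglAllocateMemoryNV-style setup like the one above; only where the pointer lives distinguishes this from a plain vertex array:

```c
extern GLfloat *var_mem;   /* memory allocated with wglAllocateMemoryNV */

void draw_mesh(const GLushort *indices, GLsizei index_count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, var_mem);   /* same call as any vertex array,
                                                   just pointing into the range  */
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```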

Also, much like texture objects, vertex objects would be difficult to change dynamically. As it stands, VAR has no problem dealing with dynamic vertex data (as long as you’re using AGP memory with fast writes).
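To make the dynamic case concrete, here is a rough sketch under the same assumptions as the setup code above (var_mem is the allocated range; the animation math is a placeholder). The one extra call that matters is glFlushVertexArrayRangeNV, which guarantees CPU writes to the range are visible to subsequent vertex pulls; a real application would also double-buffer within the range, or use NV_fence, so it never overwrites vertices the GPU is still reading.

```c
typedef void (APIENTRY *PFNGLFLUSHVERTEXARRAYRANGENVPROC)(void);
extern PFNGLFLUSHVERTEXARRAYRANGENVPROC glFlushVertexArrayRangeNV;

extern GLfloat *var_mem;   /* the AGP range from the setup sketch */

void update_and_draw(const GLushort *indices, GLsizei index_count,
                     GLsizei vertex_count, float time)
{
    GLsizei i;

    /* Write the new positions straight into uncached AGP memory.  Keep the
     * writes sequential (write combining / fast writes) and never read back. */
    for (i = 0; i < vertex_count; ++i) {
        var_mem[3 * i + 0] = (GLfloat) i;
        var_mem[3 * i + 1] = time;          /* placeholder animation */
        var_mem[3 * i + 2] = 0.0f;
    }

    /* Make the CPU writes visible to the vertex puller before drawing. */
    glFlushVertexArrayRangeNV();

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, var_mem);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);
}
```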

The LockArrays() extension already takes care of the “no necessary copies” part.

The real deal with VertexArrayRange() is probably that the driver can lock the memory for DMA and create the scatter/gather table and do other kernel-level set-up ONCE, instead of having to do it per API call. Once the VertexArrayRange() is established, I can see how the nVidia OpenGL implementation doesn’t need to call into the kernel at all except to swap buffers, or if you do WGL set-up/configuration changes.

As for LockArrays: not true. EXT_compiled_vertex_array is a very poorly specified extension, and it doesn’t eliminate data copying, since it is completely impractical to lock down the user’s memory on a moment’s notice. (For one, performance would be disastrous; copying is faster.)

I’d say VAR is at the right level of abstraction – it gives you the control you need (and that D3D has never adequately provided!) for real dynamic geometry with high performance.

- Matt

I thought lock arrays was mainly for multi-pass effects? I can’t see how it benefits anything else.

Nutty

For software T&L, an OpenGL implementation could cache pre-transformed vertices as a multi-pass optimization. For hardware T&L, you can get more efficient transfer to the GPU by putting the data in AGP memory, but pre-transforming is not a particular win.
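A minimal sketch of that multi-pass case, assuming EXT_compiled_vertex_array is available and the two entry points have been fetched with wglGetProcAddress (or glXGetProcAddressARB); the texture and blend state for the two passes is omitted, and draw_two_passes is just an illustrative name:

```c
#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNGLLOCKARRAYSEXTPROC)(GLint first, GLsizei count);
typedef void (APIENTRY *PFNGLUNLOCKARRAYSEXTPROC)(void);

extern PFNGLLOCKARRAYSEXTPROC   glLockArraysEXT;
extern PFNGLUNLOCKARRAYSEXTPROC glUnlockArraysEXT;

void draw_two_passes(const GLfloat *verts, GLsizei vert_count,
                     const GLushort *indices, GLsizei index_count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);

    /* Promise the driver these arrays won't change until unlock, so a
     * software T&L implementation can transform each vertex once and
     * reuse the results for the second pass. */
    glLockArraysEXT(0, vert_count);

    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);  /* pass 1: base     */
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);  /* pass 2: lightmap */

    glUnlockArraysEXT();
    glDisableClientState(GL_VERTEX_ARRAY);
}
```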

Thanks -
Cass