Vertex arrays

Hello

I have a question (well, I wouldn’t have written this post if I hadn’t). How do I use vertex arrays? Is it difficult? Can all graphics cards use them? It would be great if you could give me some code to see how it works.

Thanks a lot!

http://home.earthlink.net/~bjonesds12/Programming/OpenGL/VertexArray1WGL/main.cpp Just check out the cubevert and texcoord arrays, and the Render() function.

Or there’s also an example on my website.

How to use vertex arrays?

0 --> Everything starts by putting your data in arrays (obviously).
I recommend having one array for texcoords, one for vertices, and one for vertex colors. Interleaving the data simply doesn’t pay off in any way, and it quickly becomes hell to manage.

1 --> Then you tell GL that it should pull information from those arrays.
You do this by calling gl***Pointer(...). The *** part may be: Vertex (geometry data), TexCoord (per-vertex texture coordinates, one array per texture unit), Color (per-vertex color) or Normal (per-vertex normal). There are other arrays too, but they are not widely used and sometimes not accelerated; I don’t remember them off-hand.
I don’t remember the exact parameters right now either, sorry, but you should have no problem finding out how they work.

2 --> Then you have to tell GL which arrays are enabled. To tell the truth, you can also do this before (1). Either way, you do it by calling glEnableClientState(***). The *** may be GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_NORMAL_ARRAY and so on, with a separate call for each array you want to enable. Disabling an array is done with glDisableClientState(...).

3 --> Drawing with vertex arrays is done with glDrawArrays (very simple, but it can’t reuse shared vertices) or glDrawElements (indexed, and usually the fastest since shared vertices are reused). Both calls pull data from every enabled array, so take care!

Vertex arrays can be optimized in many ways, but get this plain version working first - there’s a small sketch right below that puts the three steps together.
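
To make the steps concrete, here is a minimal sketch in C, assuming a GL 1.1 context is already current (the quad data and the DrawQuad name are made up just for illustration):

[code]
#include <GL/gl.h>

static GLfloat verts[] = {            /* x, y, z per vertex */
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f };
static GLfloat colors[] = {           /* r, g, b per vertex */
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 1.0f };
static GLfloat texcoords[] = {        /* s, t per vertex */
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f };
static GLushort indices[] = { 0, 1, 2,  0, 2, 3 };   /* two triangles */

void DrawQuad(void)
{
    /* step 1: tell GL where each array lives */
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);

    /* step 2: enable only the arrays we actually filled */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* step 3: draw - glDrawElements pulls the data through the index list */
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]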

Is it difficult?

Basic vertex arrays are not; optimizing them with extensions, however, can become a bit more difficult, and managing them correctly in large-scale projects may also become a problem. Usually it’s easy.

Can all graphics cards use them?

YES! They have been standard since GL 1.1. You may see an outstanding performance gain (up to 100% or more) by switching from immediate mode to vertex arrays, and you don’t even need to fetch extensions!
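
For comparison, the immediate-mode version of the little quad from the sketch above would be something like this (same made-up arrays assumed) - one GL call per attribute per vertex, which is exactly the per-vertex overhead vertex arrays remove:

[code]
/* Immediate-mode equivalent of the DrawQuad() sketch above. */
void DrawQuadImmediate(void)
{
    int i;
    glBegin(GL_TRIANGLES);
    for (i = 0; i < 6; ++i) {
        int v = indices[i];
        glColor3fv(&colors[v * 3]);
        glTexCoord2fv(&texcoords[v * 2]);
        glVertex3fv(&verts[v * 3]);
    }
    glEnd();
}
[/code]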

It would be great if you could give me some code to see how it works.

Other people already did. I put in my 0.02 for the ‘theoretical’ part. I recommend http://nehe.gamedev.net .

Good luck! [GL!]

Originally posted by Obli:
Interleaving the data simply doesn’t pay off in any way, and it quickly becomes hell to manage.

Is that true? I was under the impression that interleaved arrays (compiled into display lists for static geometry) were the fastest standard way of drawing something (I have no idea about VBOs and such like)

?

Allan

Thanks a lot for your help - Obli especially! It doesn’t look that complicated really; I just have to make the effort to write a program with vertex arrays.

Obli also said that it’s the fastest way of drawing in OpenGL. So what about display lists? They aren’t immediate mode, so are they related to vertex arrays?

Thanks

Allan,
Just take a look at how the extensions have evolved… no one is talking about interleaved vertex arrays. Yes, they provide a theoretical speedup, but I’ve heard this was never confirmed in real-world scenarios. Besides, the format explosion was simply too great; it looks like vendors are dropping them. Not 100% sure however - say 95%. :wink:
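
By “format explosion” I mean the glInterleavedArrays path in core GL, where every attribute combination needs its own format token (GL_V3F, GL_C3F_V3F, GL_T2F_C3F_V3F and so on). Just as a hedged sketch, with made-up data:

[code]
/* One struct per vertex, one format token describing the whole layout. */
typedef struct { GLfloat s, t, r, g, b, x, y, z; } VertT2C3V3;

static VertT2C3V3 quad[4] = {
    { 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   -1.0f, -1.0f, 0.0f },
    { 1.0f, 0.0f,   0.0f, 1.0f, 0.0f,    1.0f, -1.0f, 0.0f },
    { 1.0f, 1.0f,   0.0f, 0.0f, 1.0f,    1.0f,  1.0f, 0.0f },
    { 0.0f, 1.0f,   1.0f, 1.0f, 1.0f,   -1.0f,  1.0f, 0.0f } };
static GLushort quadIndices[] = { 0, 1, 2,  0, 2, 3 };

void DrawQuadInterleaved(void)
{
    /* sets the pointers AND enables the matching client states by itself */
    glInterleavedArrays(GL_T2F_C3F_V3F, 0, quad);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, quadIndices);
}
[/code]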

[—], <-- Uhm, looks strange!
Well, in fact, display lists are probably faster. The point here is that I find it difficult to make a comparison.
Display lists were originally designed to allow indirect rendering. Under GLX you can connect to a remote machine, send GL commands to it and take just the results back.
This yields less network traffic, since all the data is stored in server-side memory. The same advantage applies on our own machines with 3D accelerators in them, since we don’t have to cross the AGP bus for every draw.
Now, I should check what happens when an array is used inside a display list, since I don’t remember right now.
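
Just to show what I mean by the display-list side of the comparison, a minimal sketch: whatever is recorded between glNewList and glEndList (immediate-mode calls or a vertex-array draw) ends up stored server side and gets replayed with glCallList:

[code]
/* Compile once, replay every frame with glCallList(list). */
GLuint MakeTriangleList(void)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
        glBegin(GL_TRIANGLES);
            glVertex3f(-1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    glEndList();
    return list;
}
[/code]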

The point is that by using extensions you can also allocate vertex arrays in server-side memory, just as you do with display lists. I actually think this would be the fastest way, since drivers are probably very optimized for this task. Furthermore, I think the added flexibility is well worth a small performance hit (assuming there is one).
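
As a hedged sketch of that idea, using ARB_vertex_buffer_object (one such extension; the glGenBuffersARB / glBindBufferARB / glBufferDataARB entry points have to be fetched at runtime through wglGetProcAddress or glXGetProcAddress, which I’m not showing here):

[code]
/* A small made-up triangle pushed into a server-side buffer, with the
   vertex array then sourced from that buffer instead of system memory. */
static const GLfloat triVerts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f };
static GLuint triVBO;

void UploadTriangle(void)
{
    glGenBuffersARB(1, &triVBO);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, triVBO);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(triVerts), triVerts,
                    GL_STATIC_DRAW_ARB);
}

void DrawTriangleFromVBO(void)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, triVBO);
    /* with a buffer bound, the last argument is an offset, not a pointer */
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]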

Maybe someone here can write a comparison benchmark with some analysis.

I’ll take a look at how vertex arrays are managed when used inside display lists… this will take a while (I’m going to check standard VAs and VAOs)…

Thank you for your feedback! Very much appreciated, really.

Originally posted by Obli:
Maybe someone here can write a comparison benchmark with some analysis.

alakazam!
http://www.fl-tw.com/opengl/GeomBench

quite a nifty program - convinced me that display lists were the way to go over compiled vertex arrays…

Enjoy

Allan

Originally posted by Allan Walton:
[b] Is that true? I was under the impression that interleaved arrays (compiled into display lists for static geometry) were the fastest standard way of drawing something (I have no idea about VBOs and such like)

?

Allan

[/b]

Of course, while compiling the display list the driver is free to rearrange the data in the most efficient way, so it does not matter at all whether the data was originally in an interleaved array or not.

Interleaved arrays suck big time when you need to update only a part of the data, e.g. only texcoords or only normals - in that case you’d need to re-upload all the data into AGP/video memory (too bad for DirectX, which only has FVF (i.e. interleaved) buffers).
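
A hedged sketch of that point, assuming one ARB_vertex_buffer_object buffer per attribute (texcoordVBO here is a hypothetical handle): when only the texcoords change, only that one buffer gets touched.

[code]
/* Only the texcoord data is re-sent; the vertex, color and normal
   buffers stay untouched in AGP/video memory. */
void UpdateTexcoordsOnly(GLuint texcoordVBO,
                         const GLfloat *texcoords, GLsizeiptrARB bytes)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, texcoordVBO);
    glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0, bytes, texcoords);
}
[/code]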

~velco

Originally posted by Obli:
[b] 0 --> Everything starts by putting your data in arrays (obviously).
I recommend having one array for texcoords, one for vertices, and one for vertex colors. Interleaving the data simply doesn’t pay off in any way, and it quickly becomes hell to manage.

1 --> Then you tell GL that it should pull information from those arrays.
You do this by calling gl***Pointer(...). The *** part may be: Vertex (geometry data), TexCoord (per-vertex texture coordinates, one array per texture unit), Color (per-vertex color) or Normal (per-vertex normal). There are other arrays too, but they are not widely used and sometimes not accelerated; I don’t remember them off-hand.
I don’t remember the exact parameters right now either, sorry, but you should have no problem finding out how they work.

2 --> Then you have to tell GL which arrays are enabled. To tell the truth, you can also do this before (1). Either way, you do it by calling glEnableClientState(***). The *** may be GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_NORMAL_ARRAY and so on, with a separate call for each array you want to enable. Disabling an array is done with glDisableClientState(...).

3 --> Drawing with vertex arrays is done with glDrawArrays (very simple, but it can’t reuse shared vertices) or glDrawElements (indexed, and usually the fastest since shared vertices are reused). Both calls pull data from every enabled array, so take care!

Vertex arrays can be optimized in many ways, but get this plain version working first.
[/b]

Does it work with a normal array and TRIANGLE_STRIP primitives?
M@o

Originally posted by Obli:
The point is that by using extensions you can also allocate vertex arrays in server-side memory, just as you do with display lists. I actually think this would be the fastest way, since drivers are probably very optimized for this task. Furthermore, I think the added flexibility is well worth a small performance hit (assuming there is one).

Yeah, but the problem is that I have never used extensions before and I don’t really know how to use them.
What’s more, extensions are hardware dependent (right?), so it won’t be compatible with all graphics cards…

Thanks

PS. [—] <- I know it is strange

Yesterday I ran some simple benchmarks using the program Allan pointed me to.
I can’t be completely sure (I will try to contact the author about the numbers I got), but it turns out that

display lists on my hardware/software are slightly slower than compiled vertex arrays.
More on that as soon as I get information from the author.

BTW, it looks like vertex arrays (a core feature) + compiled vertex arrays (supported by every video card here and very easy to use) alone can give you an impressive performance boost.

More sophisticated extensions are, in effect, tougher to use and do not give the same kind of speedup.

I also have some doubts about the validity of the benchmark itself; I need to know how it works. Having one that uses dynamic geometry, with support for ARB_vertex_buffer_object, would be nice.

If someone wants to know how to use CVAs, I can tell you the specification is actually very clear; you should be able to read it and get it working (there’s a small sketch below anyway).
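
Anyway, here is a hedged sketch of the CVA path, assuming Windows/WGL for fetching the entry points (the typedefs match the usual glext.h ones, but InitCVA and DrawLocked are names I just made up):

[code]
#include <windows.h>
#include <GL/gl.h>
#include <string.h>

/* EXT_compiled_vertex_array entry points, fetched at runtime */
typedef void (APIENTRY *PFNGLLOCKARRAYSEXTPROC)(GLint first, GLsizei count);
typedef void (APIENTRY *PFNGLUNLOCKARRAYSEXTPROC)(void);
static PFNGLLOCKARRAYSEXTPROC   glLockArraysEXT;
static PFNGLUNLOCKARRAYSEXTPROC glUnlockArraysEXT;

int InitCVA(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (!ext || !strstr(ext, "GL_EXT_compiled_vertex_array"))
        return 0;   /* not supported: just fall back to plain vertex arrays */
    glLockArraysEXT   = (PFNGLLOCKARRAYSEXTPROC)
                        wglGetProcAddress("glLockArraysEXT");
    glUnlockArraysEXT = (PFNGLUNLOCKARRAYSEXTPROC)
                        wglGetProcAddress("glUnlockArraysEXT");
    return glLockArraysEXT != NULL && glUnlockArraysEXT != NULL;
}

/* Lock after the gl*Pointer/glEnableClientState setup, draw, unlock. */
void DrawLocked(GLsizei vertexCount, GLsizei indexCount,
                const GLushort *indexList)
{
    glLockArraysEXT(0, vertexCount);   /* promise: array data won't change */
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indexList);
    glUnlockArraysEXT();
}
[/code]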

Bye!