Some advice on whether I should use a vertex array?

Hello.
For the second time I'm posting in the advanced section without having an advanced question, but this one is really bothering me and I want clarity on it once and for all.
I want to write a particle system and already have a rough plan for how to do it.
The actual rendering is the only part I'm still totally unsure about, as I still haven't understood why vertex arrays are so great.
(it would be nice if you followed this link: http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/010972.html )
Would you store the vertices for every particle in one huge array?
Some guy on the other forum said this would make no sense as the particles move all the time anyway. (I don't know how this would be done anyway…)
Well, it would really be quite helpful if someone could tell me the absolute best way to render a particle system.
Thank you in advance!

Submitting geometry to GL using vertex arrays is the preferred way (it’s most efficient).

Storing four verts per particle is not the best way, though. Instead, you store each particle as a single mid-point. Your particle system should have two functions: step and render. (Most game objects work like that, actually.)
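In rough C++, the shape is something like this (a sketch only; all the names here are made up for illustration, not from any particular engine):

#include <vector>

struct Vector3 {
    float x, y, z;
    Vector3(float x = 0, float y = 0, float z = 0) : x(x), y(y), z(z) {}
    Vector3 operator+(const Vector3& o) const { return Vector3(x + o.x, y + o.y, z + o.z); }
    Vector3 operator-(const Vector3& o) const { return Vector3(x - o.x, y - o.y, z - o.z); }
    Vector3 operator*(float s) const { return Vector3(x * s, y * s, z * s); }
};

struct Particle {
    Vector3 pos;    // the single mid-point stored per particle
    Vector3 vel;
    float   size;
    float   life;   // <= 0 means dead
};

struct ParticleSystem {
    std::vector<Particle> particles;
    void step(float dt);    // physics: move, attract, fade, kill
    void render();          // build billboard quads, submit vertex array
};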

In step, you run your particle physics, attraction, fade, death, whatever.

In render, you first generate "right" and "up" vectors, typically by lifting them out of the modelview matrix and normalizing, then scaling based on distance. Then you walk each live particle and emit four vertices for it, adding/subtracting right and up respectively to generate a billboarded quad.
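In code, that part looks roughly like this (a sketch; liveCount and verts are placeholder names, and Vector3/Particle are as sketched above):

float m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);   // needs <GL/gl.h>

// OpenGL stores the matrix column-major, so the first row of the
// upper-left 3x3 is m[0], m[4], m[8]: the camera's right axis in
// world space. Normalize these if your modelview contains scale.
Vector3 right(m[0], m[4], m[8]);
Vector3 up   (m[1], m[5], m[9]);

for (int i = 0; i < liveCount; ++i) {
    const Particle& p = particles[i];
    Vector3 r = right * p.size;
    Vector3 u = up * p.size;
    verts[i * 4 + 0] = p.pos - r + u;   // upper left
    verts[i * 4 + 1] = p.pos + r + u;   // upper right
    verts[i * 4 + 2] = p.pos + r - u;   // lower right
    verts[i * 4 + 3] = p.pos - r - u;   // lower left
}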

Then you submit the vertex array. Note that you’ll need one array per texture you use in your particle system; with texture sheeting, you can usually get away with a single array.
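With a sheet, each particle just gets texcoords offset into the sheet; roughly like this (made-up names, assuming an n-by-n sheet with sub-images counted left-to-right, top-to-bottom):

void frameToTexcoords(int frame, int n, float uv[8])
{
    float cell = 1.0f / n;
    float u0 = (frame % n) * cell;   // column of the sub-image
    float v0 = (frame / n) * cell;   // row of the sub-image
    float u1 = u0 + cell;
    float v1 = v0 + cell;
    uv[0] = u0; uv[1] = v1;   // upper left
    uv[2] = u1; uv[3] = v1;   // upper right
    uv[4] = u1; uv[5] = v0;   // lower right
    uv[6] = u0; uv[7] = v0;   // lower left
}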

If your particles have orientation, you have to generate the orientation vectors per particle (or per particle group, if you make such an optimization) and add to the mid-point; this amounts to software transform, but it’s still more efficient to do that and submit a vertex array, than to submit a zillion separate modelview matrices and a zillion different small primitives.

Oh. Thanks, that makes sense. It doesn't sound very easy, though.

In render, you first generate "right" and "up" vectors, typically by lifting them out of the modelview matrix and normalizing, then scaling based on distance. Then you walk each live particle and emit four vertices for it, adding/subtracting right and up respectively to generate a billboarded quad.

This seems extra-tricky.
What do you mean by orientation?
Thanks for the help! I guess I have to look for a tutorial now that helps me do what you stated in the above quote.
Thanks again.

That part actually isn’t as hard as it sounds. Here’s a great tutorial that helped me:
http://www.lighthouse3d.com/opengl/billboarding/

Particularly you’re interested in the “True Billboards” section, to see the implementation of what jwatte was telling you.

Hope that helps!

Hmmm, interesting.
I guess that will get me going.
But there are so many billboarding types to choose from…
Thanks for the link!

Particles with orientation would be particles that aren’t just billboards. Suppose you blow chunks out of a mountainside. You might generate a spray of tetrahedron-shaped mini-boulders that spin away. Because these are almost real geometry, they have orientation, and thus a modelview matrix each (to spin and position each block correctly).

However, in a situation like that, you’re usually still better off doing software transform on each piece, and emitting them all into one big vertex array, and then drawing that vertex array. If you do block->world transformation, then you can still let the card do world->eye and eye->projection, so you don’t need to use the x86 divider (shudder!) for that.
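In rough code (Chunk, orient and the array names are made up; Matrix3 * Vector3 is assumed to rotate the vector):

for (int i = 0; i < chunkCount; ++i) {
    const Chunk& c = chunks[i];   // has Matrix3 orient; Vector3 pos;
    for (int v = 0; v < VERTS_PER_CHUNK; ++v) {
        // block->world done in software on the CPU...
        worldVerts[i * VERTS_PER_CHUNK + v] = c.orient * modelVerts[v] + c.pos;
    }
}
// ...then world->eye and eye->projection stay on the card: leave the
// modelview set to the camera and draw worldVerts in one call.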

Hello.
I hope this post is still alive, as I have another question (or maybe two).
I managed to do billboarding now (thanks to the article), computing the vertices manually so that I can use vertex arrays later.

GetUpRightVector();
UR = PWPos + myRight * size * 0.5 + myUp * size;   // upper right
UL = PWPos - myRight * size * 0.5 + myUp * size;   // upper left
LR = PWPos + myRight * size * 0.5;                 // lower right
LL = PWPos - myRight * size * 0.5;                 // lower left

Here is the code. (I know it looks ugly; ignore size and the 0.5.) UR is the upper-right vertex and so on… PWPos is the world position of my object; the right/up vectors I get via a function. The whole thing is working. So far so good.
Now I stumbled over a particle article that contains this code:

// calculate cameraspace position
Vector3 csPos = gCamera->GetViewMatrix() * particles[nr].position;
// set up shape vertex positions
shapes[nr].vertex[0] = csPos + Vector3(-particles[nr].size, particles[nr].size, 0);
shapes[nr].vertex[1] = csPos + Vector3( particles[nr].size, particles[nr].size, 0);
shapes[nr].vertex[2] = csPos + Vector3( particles[nr].size, -particles[nr].size, 0);
shapes[nr].vertex[3] = csPos + Vector3(-particles[nr].size, -particles[nr].size, 0);

Apart from the fact that I can't figure out how I would get the camera matrix, which approach do you think is faster?

And another vertex array question.
How can I texture using a vertex array?
(I am quite sure that I need no more than the same 4 texture coordinates for ALL particles; wouldn't it be a waste to store them all? But as I want to call glDrawElements only once per effect, what can I do?)

Hm that was quite a lot, thanks for any reply.

[This message has been edited by B_old (edited 12-09-2002).]

The second code you posted submits vertices in eye (camera) space. The first code snippet submits vertices in world space (where you get help from Z buffering and camera orientation).

You have to store texture coordinates for each vertex that you submit. You can pre-allocate an array of appropriate texture coordinates and just point at it; there's no need to write to it every time.

Aha, OK. Then I will stick to the first variant.
About the texture-array thing I am still not sure, but I guess I will find a way sometime.
Thanks for your help, guys!

For the texture coord array:

As jwatte stated above, since your polys will always have the same texture coords, generate that array at init and you won't have to touch it again. Then you can alter the vertex array as much as you want to move/update your particles, and the texture coords will still match up.

I see what you are trying to do by saving memory, but there is no way to reuse the same 4 texture coords for all vertices. Just make the whole big array; memory is cheap these days.
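Something like this, roughly (array sizes and names are made up; the GL calls are the standard vertex array entry points):

#include <GL/gl.h>
#include <string.h>

enum { MAX_PARTICLES = 1024 };

static GLfloat texcoords[MAX_PARTICLES * 4 * 2];   // written once at init
static GLfloat verts[MAX_PARTICLES * 4 * 3];       // rewritten every frame

void initTexcoords(void)
{
    // The same four corners for every quad, repeated across the array.
    static const GLfloat corner[8] = { 0, 1,  1, 1,  1, 0,  0, 0 };
    for (int i = 0; i < MAX_PARTICLES; ++i)
        memcpy(texcoords + i * 8, corner, sizeof(corner));
}

void drawParticles(int liveCount, const GLushort* indices)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
    glDrawElements(GL_QUADS, liveCount * 4, GL_UNSIGNED_SHORT, indices);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}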

My 2 cents, hope it helps.

OK. Hello.

I have just made a little test where I draw 128 billboarded untextured quads.
With the vertex array approach it is almost 30 (!) fps slower than immediate mode. What am I doing wrong?
Thanks for any help.

I am unhappy now

Are they the same size? Are you geometry-transfer or fill-rate bound? Is the code identical, except for the way vertices are submitted? Do you use DrawRangeElements() to submit the geometry?

There’s lots of threads on this board talking about how to efficiently submit vertex arrays; you could go spelunking in the archives.

Hello.
Everything is identical.
I have a loop where I compute the four vertices and store them in an array. One time I draw each quad as soon as its four vertices are computed (fast).
The other time I wait for the loop to finish and draw the finished array of vertices via OpenGL vertex arrays (slow).
I use glDrawElements once. No RangeElements.
If you want to look at my code I would gladly give it to you; maybe my approach is totally weird.

Another question: if I draw the quads really close together, everything gets very slow. How is that supposed to work with a particle system?

EDIT:
I now know that there is some kind of error in my code, though I have not found the reason for it yet.
I would still appreciate it if you had a look at my code.

EDIT:

It works now. And it is faster than immediate mode! Yeah!
It gets faster the more quads I draw (compared to immediate mode, I mean).
Man, this is so great!
Hmmm, cannot stop smiling.
Thanks for your help!

[This message has been edited by B_old (edited 12-15-2002).]