Vertex Array Question

I’ve written an OBJ loader that loads up shared vertices, normals, tex_coords and vertex colours.

My question is this:

The numbers of vertices, normals etc. are not the same. Each is referenced by the number of its position in the relevant array. i.e., for one particular vertex I may be using vertex 12, normal 72, tex_coord 33 & vertex colour 54.

But from what I gather by reading up on the subject, the first vertex in the array needs to correspond to the first normal in the normal array, and so on and so forth.

If this is the case then surely I’ll need to create new arrays in which to store this data, which effectively defeats the point of trying to use shared vertex information.

Have I missed the point, overlooked some way of using them, or am I just thick?

Any help would be much appreciated.

That is correct. Data can not be shared between a vertex array, normal array, and so on; all the arrays are stepped through by a single common index.

If that’s the case then Vertex arrays are useless… What’s the point in using vertex arrays then…

However you can still use shared vertices and texture coords, normals, etc… But you have to create your own data structures for them…

example:

typedef struct _normal
{
    GLfloat n[ 4 ];
} NORMAL;
//------
typedef struct _vertex
{
    GLfloat v[ 4 ];
} VERTEX;
//-------
typedef struct _tcoord
{
    GLfloat u;
    GLfloat v;
} TCOORD;
//--------
typedef struct _texture
{
    GLuint texture;
    char* file;
} TEXTURE;
//--------
typedef struct _side
{
    int NumVerts;      /* vertices in this face */
    VERTEX* verts;
    NORMAL* normal;
    TCOORD* tcoord;
    TEXTURE* texture;
} SIDE;
//--------
typedef struct _object
{
    int NumSides;
    SIDE* sides;
} OBJECT;
//----------

//--------Your globals here-------
NORMAL  globalnormals[]  = { … };
VERTEX  globalverts[]    = { … };
TCOORD  globaltcoords[]  = { … };
TEXTURE globaltextures[] = { … };
SIDE    globalsides[]    = { … };
OBJECT  globalobjects[]  = { … };

That was a quick and crude method, but you must have pointers or indices to each element to share vertices… You can expand it further into a skeletal system by adding joints, quaternions, constraints, etc…

It makes sense to load your data from a file…

On the contrary. Vertex arrays are very useful. And in certain applications very, very fast, specifically when using compiled vertex arrays while rendering multiple passes.

> On the contrary. Vertex arrays are very useful.

Can you elaborate on that ?

What’s the point of using VAs when you need to do multiple passes and do not have the compiled VA extension?

All I see is a decrease of around 10% in my application’s performance, compared to immediate mode.

Y.

As far as I’m concerned, vertex arrays are a pain in the ass… They screw texture mapping up… Someone said in another thread to use extra vertices, which will solve that…

What if you had a world database with 2,000,000 vertices?? Would you want to add extra vertices to that?? No way…

A far better way is not to use them…

Unless someone here has working code which properly texture maps an object (cube, tetrahedron, etc.) with vertex arrays, then I might consider vertex arrays, but nobody I know has ever done that…

Strange, I use vertex arrays for an application which usually handles 262000+ vertices (a 512x512 vertex map), AND I also use TexCoord arrays and color arrays along with it. For an industrial strength height mapper, I’m very, very pleased with the results. I have absolutely no problems with the texcoord mapping, and I’m extremely impressed at the idea of using only one function call to render all three arrays.

So what, exactly, is your problem here?

Siwko

The problem (and the reason for my question) is this:

start with a simple quad. The vertices of which are:

v1: 1, 1, 0
v2: -1, 1, 0
v3: -1, -1, 0
v4: 1, -1, 0

At each vertex, the normal would be:

0 , 0 , 1

With vertex arrays, you would have to copy that normal 4 times in order to get them to work.

Now consider a cube:

Each corner is a vertex. So there are eight vertices. There are six faces and a single normal corresponds to each face.

To take that a stage further, let’s say we want to apply a texture to each face. We could use 2d texture co-ords, which would give us a total of 4 tex_coords for this object. (They are the same for each face.)

With vertex arrays instead of having

8 vertices
6 normals
4 tex_coords

We would have

24 vertices
24 normals
24 tex coords

Now think of an object with 1000+ polys and you start to see the problem.

My actual problem goes beyond that…

I’m doing keyframed animation, so all the vertices and normals have to be stored for each keyframe. Say in total I load up 50 keyframes for all the character’s movements; that’s one hell of a lot more data that I have to store.

But data storage isn’t the only problem. Say I have a model with 1300 vertices (& normals) which has about 1000 faces. I need to interpolate each vertex and normal in order to get the animation to work, and this has to be done every frame.

If I convert that to vertex arrays, then I’m talking about performing calculations on 6000 to 8000 vertices and normals (as opposed to 2500).

That’s up to 5500 more calculations per frame!!!

Now add multiple enemy characters, backgrounds etc. and it all starts to get messy.

I hope you see my point.

An additional question:

If anyone has had experience with this…

If I wished to do my keyframing, would it be better to convert all the data in my animation frames to the vertex array ‘style’ and make interpolation easier, or keep them as they are and work out each frame which vertex and normal belongs where?

Additional:

WarlordQ -

I used to keep the data as you do. It’s a nice tidy way of doing it, but,

take for example your normal structure:

you’ve got an array of floats wrapped in a struct, and those structs are themselves stored in an array.

To save on storage, why not just have an array of floating point numbers, written in this fashion:

normal_array[] = { x1, y1, z1, x2, y2, z2, … }

If you start doing keyframe animation with those vertices and normals then, at the simplest level, a straight linear interpolation would look like…

for(i = 0; i < 3*vertex_count; i++)
{
    Object_I_Want_To_Draw.vertices[i] = keyframe_one.vertices[i]*(1-time)
                                      + keyframe_two.vertices[i]*(time);
}

Try doing that with structures.

This eliminates all your structures. I’ve found that it has yielded some performance increase. (Not much, but hey, some is good enough.)

The other thing is that I have a GeForce2; if things are compiled with display lists or vertex arrays they look **** loads nicer. (Materials that have emissive properties actually affect the colour of other surfaces.)

This is a big thing for me (hell I spent £250 on the thing, I want it to at least show me why!!!)

My milkshape loader uses a method not unlike warlordQ’s to render multiple surfaces with vertex arrays. It’s linked from my page.

Paul.

> Strange, I use vertex arrays for an application which yields usually 262000+ vertices (512x512 vertex map), AND I also use TexCoord arrays and color arrays along with it. For an industrial strength height mapper, I’m very very pleased with the results.

OK… Post the source code or snippets, so we can see vertex array texture mapping…

> To save on storage, why not just have an array of floating point numbers, written in this fashion:

hehehe… We do… We use pointers to those in our structs, or you can use indices… OK, you’re talking about animation; we would use a different data structure for that, an object hierarchy with linked lists…

But how would you texture map that??? Or will they be flat shaded??

> With vertex arrays instead of having
>
> 8 vertices
> 6 normals
> 4 tex_coords
>
> We would have
>
> 24 vertices
> 24 normals
> 24 tex coords

4 tex_coords??? How is that???
Mate, I have to tell you, that’s the theory of vertex arrays… But when you have to texture map them, that is not the case… You will be using more than 8 vertices; someone else told me that’s the only way to do texture mapping with vertex arrays…

So far nobody has shown any working vertex array code for a texture mapped cube with 8 vertices, 6 normals, 4 texture coords… If anyone has, post the code, please!!! I would love to see that…

Will your compiled vertex arrays work on a Voodoo card?? Or a Matrox G400?? Just curious…

[This message has been edited by warlordQ (edited 08-01-2000).]

My 2c.

If storage requirements are your main concern, the data structure with 8 vertices, 6 normals and 4 texture coordinates is what you are left with.
So the answer is: forget vertex arrays for a moment and think about how easy it is to feed those into the immediate mode functions glVertex3fv, glNormal3fv, and glTexCoord2fv. You have the indices from all six faces into each of the three arrays. That’s all you need to construct the simplest for-loop in town. And with some optimization you don’t have to resend data for identical values, e.g. here one normal for four vertices.
Those functions are blazingly fast on hardware geometry because the API is only a few lines of code there.
Multi-million vertices per second are no problem and it will work on any OpenGL implementation.

For sheer performance reasons, special compiled vertex arrays, vertex array range extensions and display lists come to the rescue, but they are not storage optimized.

So it’s your decision:
a readable drawing routine with small storage requirements (which, BTW, means better cache usage),
or fighting with the design of vertex arrays in OpenGL.

Correct. Using vertex arrays, you can not texture map a cube properly with only 8 vertices and 8 texture coordinates. But there are exceptions to the rule. One, if the cube is not texture mapped, then obviously it can be drawn using only 8 vertices. If texture coordinate generation is enabled, then it can be texture mapped while only using 8 vertices and 8 normals.
If you duplicated just 4 of the vertices, then the cube could be texture mapped correctly with 12 vertices and 12 texture coordinates. But for completely arbitrary texture mapping, which would need 24 texture coordinates, a vertex array would need 24 vertices, because of the one-to-one relationship it uses between vertices and texture coordinates. This is assuming all sides share the same texture, since the texture can not be changed mid-way through the execution of the array.

[This message has been edited by DFrey (edited 08-01-2000).]

> With vertex arrays instead of having
>
> 8 vertices
> 6 normals
> 4 tex_coords
>
> We would have
>
> 24 vertices
> 24 normals
> 24 tex coords

I should have said:

If we use glBegin()/glEnd() with glVertex, glNormal etc., we can apply a texture per face with 2d texture co-ords, e.g. the crate texture applied to the cube in the NeHe demos.

This works if all the sides have a texture applied between uv coordinates 0,0 and 1,1.

If you were mapping a texture across the whole thing, then obviously this would not be the case; I was just trying to illustrate the point of sharing data…

[This message has been edited by Rob The Bloke (edited 08-01-2000).]

> We would have
>
> 24 vertices
> 24 normals
> 24 tex coords

Hmm, not completely true… I’ve seen an implementation of a correctly texture-mapped cube using 20 indices (i.e., 20 vertices / 20 tex coords). But it’s only a special case, because 4 vertices of the cube end up having the same world position and the same tex coords. We’re still far from 8 :(

Y.

yo!
Okay, I’m a raw beginner, but this has some impact on what I’m doing. I have to do projections down to or up to 3 dimensions from n dimensions (typically 4 to 6). The way I store my complexes (the shape, like a hypercube) is as a list of simplices (n-dimensional triangles) which map indices into a list of vertices (and each vertex is an integer … the data space is only 19-bit accurate).

I’m not sure exactly what y’all are saying, but are you saying that I have to have unique vertex coordinates mapped to each simplex in the complex? I’m pretty sure that if I had to do that, I’d just have an algorithm unfold each simplex into a master vertex list. It may be okay to hold each vertex uniquely down in 3-space (you’re only at 3 * #simplices), but up in 6+ I start to see 80%[1]+ savings in memory allocation etc.

[1] assume you start with an n-triangle in n-space; this requires n+1 (n-1)-triangles and n+1 vertices; in a basic addition routine, splitting any edge gains n simplices for a mere 1 more vertex: by this method, in n-space, if the complex has 5n+1 simplices, there are only n+6 vertices, as opposed to (5n+1)*n vertices.

I think I get what you’re saying.

This is pretty much what I’m doing (I think):

storing an object as a list of triangles (I’m using quads as well),

then having those faces reference the vertices and normals in separate arrays.

The discussion is:

Drawing those out with glBegin()/glEnd() using glVertex(), glNormal() etc to draw each vertex of the face.

We can however use vertex arrays to draw the data, which is faster.

The problem is that you can’t share the data. So even if several faces meet at one corner of an object with the same vertex position, if they have different normals or texture co-ords you have to copy that data several times.

Back to the discussion:

At the moment, I’m thinking, the vertex array definition of shared data is this:

Data is shared if, and only if, the texture coord, normal and vertex are all identical.

Which I’m thinking, is a bit on the chud side.

Could someone have a word with the ARB please…

In my mind I don’t like the idea of performing those extra calculations, but I guess I’ll give it a go, and see if it goes any slower…

Oh, I can tell you right now, with 90% certainty, that it should be a bit faster, seeing as how you are cutting down on the number of OpenGL function calls when using vertex arrays compared to the immediate (glBegin/glEnd) approach. And it should be much faster if you need to draw a given array multiple times with multiple pass rendering, as long as you are using compiled vertex arrays. Though I have come to the conclusion that this is not always the case. Just remember that not all OpenGL ICDs are created equal. Unexpected things can happen from one OpenGL ICD to another. What may be faster with one ICD may be slower with another, and with yet another it may not work at all. This I have learned after much hair pulling.

Yeah, shared vertices are the go… Here’s a scenario where this helps…

You can transform all the ones in the camera frustum (by backward rotating the vector); that means putting the camera in world object space and only translating the vertices in the camera frustum… Of course this means reading the GL_MODELVIEW matrix…

This allows us to move only the vertices that are actually viewable… By using shared vertices, we save on transformations… Had we not used a data structure for sharing vertices, and put them all in vertex arrays instead, we would have to transform more vertices (I don’t know how many more, but it’s approx 3 times)…

The theory behind vertex arrays looks great, but it does pose a few problems with texture mapping and lighting… However, if they made vertex arrays something like this:

GLfloat verts[] = { … };
GLfloat tcoords[] = { … };  /* NOTE: tcoords will be a different size than verts */
GLuint vindices[] = { … };  /* vertex indices */
GLuint tindices[] = { … };  /* texture indices */

glBindTexture( GL_TEXTURE_2D, ourTexture );
for( int t = 0; t < MAXINDICES; t += 4 )
{
    glTexCoordPointer( 2, GL_FLOAT, 0, &tcoords[ tindices[ t ] ] );
    glDrawElements( GL_QUADS, 4, GL_UNSIGNED_INT, &vindices[ t ] ); /* GL_UNSIGNED_INT to match the GLuint indices */
}

The only snag with the above code is that tindices must be the same size as vindices, or you’ll get array-out-of-bounds errors…
The code most likely won’t work… but it would be good if it did… If it did work, then we could have a cube texture mapped with 8 verts and 4 texcoords… Hmm, I wonder if it does work?? Or something like this perhaps??

[This message has been edited by warlordQ (edited 08-01-2000).]

even better is this:

GLfloat verts[] = { … };
GLfloat tcoords[] = { … };  /* NOTE: tcoords will be a different size than verts */
GLuint vindices[] = { … };  /* vertex indices */
GLuint tindices[] = { … };  /* texture indices */

glBindTexture( GL_TEXTURE_2D, ourTexture );
glTexCoordPointer( 2, GL_FLOAT, 0, &tcoords[ tindices[ 0 ] ] ); /* Dunno about this line */
glDrawElements( GL_QUADS, 4, GL_UNSIGNED_INT, vindices );

If that works, it’ll be friggen great…