A better VertexArray.

Hi!

I am working on a project that uses a lot of static data.
As I use texturing and LIGHTS, VertexArrays are
out of the question in my case. So I was forced to use display lists.

And that is not good enough for me, because my project is
geometry limited. I was trying to fix that and have
run out of options. But something else comes to mind …


Currently we have …
glArrayElement(Index);

but I need something like …
glArrayElement(Index, Op);
where Op indicates the element type, like …
>vertex
Effect: glVertex{2|3|4}f(Data[Index], Data[Index+1], … );
>normal
Effect: glNormal3f(Data[Index], Data[Index+1], Data[Index+2]);
>texcoord
Effect: glTexCoord2f(Data[Index], Data[Index+1]);

so it is possible to cut calculations down to an absolute minimum.
AND this would minimize memory usage (traversal/cache hits?).
This method could be used by all VertexArray commands.
glDrawElements, for example, would have indices like …

  0. = (texcoord1, normal1, vertex1)
  1. = (texcoord2, normal1, vertex2)
  2. = (texcoord3, normal1, vertex3)
  3. = (texcoord2, normal2, vertex4)
  4. = (texcoord3, normal2, vertex1)

NB! A vertex may have a different number of components than
a texcoord (solution: one array of floats).

This would even simplify light calculations (normals).
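
To make the idea concrete, here is a rough sketch (hypothetical names - this is NOT an existing GL call) of how the proposed glArrayElement(Index, Op) could be emulated in application code on top of the current immediate-mode API. It only shows the intended meaning; by itself it gives none of the hoped-for caching benefit:

code:

/* Hypothetical sketch only: 'ArrayOp' and 'myArrayElement' are made-up names,
   not part of OpenGL. Data is one shared array of floats, set by the application. */
#include <GL/gl.h>

typedef enum { OP_VERTEX, OP_NORMAL, OP_TEXCOORD } ArrayOp;

static const GLfloat *Data;

void myArrayElement(int index, ArrayOp op)
{
    switch (op) {
    case OP_VERTEX:   glVertex3fv(&Data[index]);   break; /* uses Data[index..index+2] */
    case OP_NORMAL:   glNormal3fv(&Data[index]);   break; /* uses Data[index..index+2] */
    case OP_TEXCOORD: glTexCoord2fv(&Data[index]); break; /* uses Data[index..index+1] */
    }
}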

Once again - let’s say we have a cube (how typical).
And we want Tex, Normal and Vertex (again).

With a Display list we can collect all 6 sides and
their 24 vertices (the cube itself has 8 vertices).
If every side has the same texture (the same texcoords),
the resource wasting is obvious. Same with a VertexArray.

The situation will be better if the object is round, but I think
the method shown above would be perfect - or what?

Any comments, questions?

<CS>

>Any comments, questions?

I’ve got a couple.

You say that in your application you were forced to use display lists because you used textures and lights. Why is that? Vertex arrays can be used with texturing and lighting just fine.

Also, it sounds to me like your proposed glArrayElement(Index,Op) is the same as a glVertex3fv, glNormal3fv, or glTexCoord2fv. All three of these calls pass in all the data at once.

E.g. glVertex3fv(&Data[Index]);
is the same as
glVertex3f(Data[Index], Data[Index + 1], Data[Index + 2]);
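
For reference, a minimal sketch of the standard (GL 1.1) vertex array setup used together with lighting and texturing; the array sizes here are just placeholders for a cube. Note that one index always selects the position, normal, and texcoord with the SAME index, which is what the discussion below is about:

code:

/* Minimal sketch of standard GL 1.1 vertex arrays with normals and texcoords.
   Array contents and sizes are placeholders for a cube (4 vertices per face). */
#include <GL/gl.h>

GLfloat positions[24 * 3];
GLfloat normals  [24 * 3];
GLfloat texcoords[24 * 2];
GLuint  indices  [6 * 4];   /* 6 quads */

void drawCube(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, positions);
    glNormalPointer(GL_FLOAT, 0, normals);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);

    /* one index per vertex; it addresses all three arrays at once */
    glDrawElements(GL_QUADS, 6 * 4, GL_UNSIGNED_INT, indices);
}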

Thanks for the reply.

> You say that in your application you were forced to use display lists
> because you used textures and lights. Why is that? Vertex arrays can
> be used with texturing and lighting just fine.

  • Because lights use normals. They will be different for every face
    of a cube, which means that with a VertexArray I need to define
    one vertex 3 times! But with a Display list I can avoid redefining normals,
    so a display list IS better than a VertexArray in my case. But I still have
    to recalculate 1 vertex 3 times, and that, I think, can be avoided.

> Also, it sounds to me like your proposed glArrayElement(Index,Op) is
> the same as a glVertex3fv, glNormal3fv, or glTexCoord2fv.

That is not true. If I use an index that was used before,
GL does not need to calculate it again. For example:

glBegin…
glVertex3f(Data[0],Data[1],Data[2]);
glVertex3f(Data[0],Data[1],Data[2]);
glEnd;

will calculate exactly the same vertex twice …

glBegin…
glArrayElement(0,vertex);
glArrayElement(0,vertex);
glEnd;

only once.

>All three of these calls pass in all the data at once.

And that is a problem (in the current VertexArray)! Let’s say
we have a vertex with 2 surfaces connected to it. As the normals
for them are different, I must define this vertex as
many times as there are surfaces (with different normals)
connected to it.

> glVertex3fv(&Data[Index]) is the same as
> glVertex3f(Data[Index], Data[Index + 1], Data[Index + 2]);

Yeah, but I was trying to say that
Data needs to be an array of floats (or ints?).
glVertex3f(Data[Index], Data[Index + 1], Data[Index + 2]);
therefore seems to be easier to understand, but yours will
do the same.

You may ask why Data must be an array of floats!?
I’ll explain that now … (a cube as an example)

A cube will have 8 different vertices, 6 different normals
and (let’s say) 12 texCoords. How do we put them in ONE array?
If we use glVertexPointer this will not be a problem.

– NB! ---------------------------------
With the VertexArray (current one) we use 24(V), 24(N) and 24(T)!
With a Display list we use 24(V), 6(N) and x(T) {where x <= 24(T)}.
With a better VertexArray - 8(V), 6(N) and fewer (T) than with the Display list.
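
For illustration, a minimal sketch (hypothetical layout and names) of putting all the cube data into ONE float array, with the existing gl*Pointer calls aimed at different offsets inside it. With the CURRENT VertexArray model a single index still addresses all three blocks with the same index, so the 8/6/12 counts above cannot actually be exploited without a per-attribute index:

code:

/* One shared float array holding positions, normals and texcoords back to back.
   Layout and counts follow the cube example above; this is only an illustration. */
#include <GL/gl.h>

GLfloat Data[8*3 + 6*3 + 12*2];   /* 8 vertices | 6 normals | 12 texcoords */

void setPointers(void)
{
    glVertexPointer(3, GL_FLOAT, 0, &Data[0]);            /* 8  * 3 floats */
    glNormalPointer(GL_FLOAT, 0, &Data[8*3]);             /* 6  * 3 floats */
    glTexCoordPointer(2, GL_FLOAT, 0, &Data[8*3 + 6*3]);  /* 12 * 2 floats */
}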

Waiting for response …

Regards
<CS>

The stupid message thing deleted half my post!

Ok, to say again what I lost:

  • Because lights use normals. They will be different for every face
    of a cube, which means that with a VertexArray I need to define
    one vertex 3 times! But with a Display list I can avoid redefining normals,
    so a display list IS better than a VertexArray in my case. But I still have
    to recalculate 1 vertex 3 times, and that, I think, can be avoided.

This will save you vertex storage space and bandwidth for sending the data, but the amount of calculations for the vertices still remains the same, because there are 24 different combinations of vertex position and normal that you use, and OpenGL interprets this as 24 different vertices. Thus, it recalculates the lighting 24 times, so you don’t really save anything in terms of processing.

> Also, it sounds to me like your proposed glArrayElement(Index,Op) is
> the same as a glVertex3fv, glNormal3fv, or glTexCoord2fv.

That is not true. If I use an index that was used before,
GL does not need to calculate it again. For example:

code:

glBegin…
glVertex3f(Data[0],Data[1],Data[2]);
glVertex3f(Data[0],Data[1],Data[2]);
glEnd;


will calculate exactly the same vertex twice …

code:

glBegin…
glArrayElement(0,vertex);
glArrayElement(0,vertex);
glEnd;


only once.

Unfortunately, it isn’t quite as simple as that. Using your proposed function, you might draw a cube like this…

glBegin(..);
   glArrayElement(0, *normal*);
   glArrayElement(0, *vertex*);
   ...continue drawing side 1
   glArrayElement(1, *normal*);
   glArrayElement(0, *vertex*); //reuses vertex 0 with a different normal
   ..finish drawing cube
glEnd();

(I edited this code to make it more clear)

However, this wouldn’t reuse the vertex!
Why? Because if you are using lighting (which you are, because you are bothering to define normals), the actual data sent is the vertex position AND the normal. OpenGL needs to light it again, because it is not EXACTLY the same as the one you did before, and it will be lit differently. So in this case, the only savings you get are smaller storage space and a little bit less function call overhead. In which case, it’s just the same as using glNormal3fv and glVertex3fv.

Incidentally, glNormal*v, glTexCoord*v, and glVertex*v can be used with any type of data that glNormal*, glTexCoord*, and glVertex* can be used with. All you do is give the function a pointer to the first value in the vertex.

j

[This message has been edited by j (edited 11-28-2000).]

Hi!


> /…/ but the amount of calculations for the vertices still remains
> the same, because there are 24 different combinations of vertex position
> and normal that you use, and OpenGL interprets this as 24 different
> vertices …
Hm. Sure about that?
> … Thus, it recalculates the lighting 24 times, so you don’t really
> save anything in terms of processing.
Huh!? That makes me wonder … according to my logic it is so …
glVertex - we calculate the 3D vertex to 2D;
if we have lighting, we use the current normal (wherever it comes from).
glNormal - we do the calculations for lighting. So, it seems to me that …
[a] the vertex location in 3D/2D space remains the same - no need to recalculate
[b] the distance will be the same (used only in light calculations)
[c] (just thinking) glNormal needs matrix calculations too - can we
| reuse the result for other normals with the same direction?
| (not the light calculations themselves)
… or does OGL do it some other way?
NB! I can’t look at the documentation (I am at work), so maybe I am wrong.
– Point of view ------------------------------------------------
To my knowledge, glNormal and glTexCoord set state values, so glVertex
is the only workhorse here. (I mean, glNormal only extends glVertex.)


> Unfortunately, it isn’t quite as simple as that. Using your proposed
> function, you might draw a cube like this…
> /…/ However, this wouldn’t reuse the vertex!
> Why? Because if you are using lighting, the actual data sent is the vertex
> position AND the normal. OpenGL needs to light it again, because it
> is not EXACTLY the same as the one you did before, and it will be
> lit differently.

Are you pointing to a hardware problem (it seems so to me)?
OGL is not hardware dependent - so, if we add some functionality that the
hardware does not support, it’s the hardware manufacturers’ problem to fix that
(in my case it would be too late for me anyway). I am only pointing out
better possibilities (so, yes, I still believe it’s better).


> /…/ So in this case, the only savings you get are smaller
> storage space …
In fact I HAVE a space problem (and this is one of the ways I save it).
> … and a little bit less function call overhead. In which
> case, it’s just the same as using glNormal3fv and glVertex3fv.
Put that way - yes (and currently I am using it in my display lists).


NB! Something more to think about …
Let’s say we want a (want it to be) round object - normals would
not be a problem, but texCoords will. I mean, it is impossible to make a
perfect (the dictionary has no fair word for that) surface + spread (!?).
For example, these sets:

  1. Vertex (texCoord1,Normal1,Vertex1)
  2. Vertex (texCoord2,Normal1,Vertex1)

Waiting for response …

<CS>

Huh!? That makes me wonder … according to my logic it is so …
glVertex - we calculate the 3D vertex to 2D;
if we have lighting, we use the current normal (wherever it comes from).
glNormal - we do the calculations for lighting. So, it seems to me that …
[a] the vertex location in 3D/2D space remains the same - no need to recalculate
[b] the distance will be the same (used only in light calculations)
[c] (just thinking) glNormal needs matrix calculations too - can we
| reuse the result for other normals with the same direction?
| (not the light calculations themselves)

It’s true that if you save the vertex and normals from after they are transformed, you won’t need to transform them again, but you will need to do the lighting calculations. If you are using a point light, the position of the vertex is used in the lighting calculations to get the vector from the point to the light. This means that if you are using lighting and you change your vertex position but keep the same normal, you still need to redo the light calculation. Same thing if you keep the same vertex but change the normal.
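
To make that concrete, here is a rough sketch (made-up helper names, not actual driver code) of a point-light diffuse term. Both the transformed position and the normal enter the calculation, so caching either one alone does not let you skip the lighting:

code:

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  vsub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float vdot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  vnorm(vec3 v)
{
    float len = (float)sqrt(vdot(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Diffuse intensity for a point light: needs the eye-space position AND the normal. */
float diffuse(vec3 eyePos, vec3 eyeNormal, vec3 lightPos)
{
    vec3 toLight = vnorm(vsub(lightPos, eyePos)); /* depends on the vertex position */
    float d = vdot(eyeNormal, toLight);           /* depends on the normal          */
    return d > 0.0f ? d : 0.0f;
}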

What your scheme would require is several information caches… One for the vertex information, one for normals, one for texcoords, and so on. These caches would hold post-transform information on vertices and normals you already have passed in. If you pass in a normal or vertex that is already in the cache, OpenGL would realize this and get it from the cache. Basically for each normal, vertex, or texture coordinate that is found in the cache, you save the transform costs. However, because lighting depends on the normal AND the position, it will need to redo the lighting calculations. In a standard vertex array, the normal and vertex associated with an index are always the same, so OpenGL knows that the lighting will be the same, and it does not need to redo the lighting (unless the modelview matrix has changed).

Unfortunately, implementing a cache for each of the different attributes of a vertex would be tough. For example, if you are using hardware T&L, the cache would have to be stored in the main CPU memory, because the cache for the graphics processing unit on the video card is not designed to hold information in that way. That means that to get the cached information, the graphics processing unit needs to get the information from the main memory, which is exactly the same thing as if you sent all the information yourself (the bus between the graphics processing unit and system ram is relatively slow). Not to mention that you would have to do all the transforms in software, because it is very difficult (impossible?) to get post-transform data from the GPU. This would actually slow down the pipeline a lot, because you would be doing software T&L, and wasting the power of your video card. With hardware T&L, the cost of sending a vertex that is already transformed and the cost of sending an untransformed vertex is the same. On a system with hardware T&L, your method would slow it down.

In an implementation without hardware T&L, your method might help. You would save yourself some transformation calculations, but on the other hand, you would need to check the cache of stored information every time you specified a vertex or normal, so that could slow you down.

Basically, your method would not work very well at all with hardware T&L, and might work somewhat well with software T&L. Pretty much every video card made after this year will have hardware T&L, so I don’t know how useful this functionality would be.

I could be wrong about this, but to the best of my knowledge, that’s what the results of your new type of vertex array would be. If you want a better opinion on this, you could ask matt. He builds drivers for NVIDIA, and probably knows more than anybody else on this message board about this kind of stuff.

NB! Something more to think about …
Let’s say we want a (want it to be) round object - normals would
not be a problem, but texCoords will. I mean, it is impossible to make a
perfect (the dictionary has no fair word for that) surface + spread (!?).
For example, these sets:

  1. Vertex (texCoord1,Normal1,Vertex1)
  2. Vertex (texCoord2,Normal1,Vertex1)

I don’t get what you are trying to say here. Is there any way you could clarify?

j

Hi again!


> With hardware T&L, the cost of sending a vertex that is already
> transformed and the cost of sending an untransformed vertex is the same.
Hm. That is questionable. I think OGL will send only vertices that
were not transformed before (and the GPU holds the result, if possible).
If there is a need to reuse a vertex, OGL will send an index instead
of the (useless) vertex data (again - only if the GPU supports VertexArrays).


> In an implementation without hardware T&L, your method might help. You
> would save yourself some transformation calculations, but on the other
> hand, you would need to check the cache of stored information every
> time you specified a vertex or normal, so that could slow you down.
Why should this slow it down? I don’t understand your line of
reasoning here. So I’ll present how I think it could (should) be …
First we have an array for the (cube) data:
V1x,V1y,V1z,N1x,N1y,N1z,V2x,V2y,V2z … - the (F)irst array,
where V1x means the first vertex’s x coordinate (and so on). Now we call
glDrawElements, which will use this array. OGL creates a list (in CPU
RAM, as a (S)econd array) of flags for every element in the (F) array.
If OGL needs to calculate a newly indexed vertex, it first looks
at array (S) to see whether it really needs to be calculated.
If it was not calculated before, it will be done now (if possible, by the GPU).

BUT, I fear that current hardware is not able to support it.
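
A rough C sketch of that flag-array idea (software side only, hypothetical names), just to make the intent concrete:

code:

/* (F) = Data, (S) = Done: transform each indexed element at most once.
   transformVertex() is only a stand-in for the real modelview transform. */
#define MAX_ELEMENTS 1024

float         Data[MAX_ELEMENTS];        /* (F) packed floats, e.g. V1x,V1y,V1z,N1x,... */
float         Transformed[MAX_ELEMENTS]; /* cached results                              */
unsigned char Done[MAX_ELEMENTS];        /* (S) flag: already transformed?              */

static void transformVertex(const float *in, float *out)
{
    /* placeholder: a real version would apply the current modelview matrix */
    out[0] = in[0]; out[1] = in[1]; out[2] = in[2];
}

const float *getTransformed(int index)
{
    if (!Done[index]) {
        transformVertex(&Data[index], &Transformed[index]);
        Done[index] = 1;
    }
    return &Transformed[index];
}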


> Pretty much every video card made after this year will have hardware
> T&L, so I don’t know how useful this functionality would be.
Yes - this is the big question here. I think hardware manufacturers
are spending most of their time widening the current bottlenecks, and sooner
or later the bottleneck will be geometry.


> I don’t get what you are trying to say here. Is there any way you could clarify?
I’ll try …
Let’s say I have 1 vertex with 2 surfaces connected to it (same normal)
and different texCoords - I must define (and calculate) 2 vertices.

Hope that clears things up a bit …

<CS>

Something I forgot …


I could be wrong about this, but to the best of my knowledge, that’s what the results of your new type of vertex array would be. If you want a better opinion on this, you could ask matt. He builds drivers for NVIDIA, and probably knows more than anybody else on this message board about this kind of stuff.

I am not able to contact him - can you point him here?

<CS>

> With hardware T&L, the cost of sending a vertex that is already
> transformed and the cost of sending an untransformed vertex is the same.

Sorry, I wasn’t very clear with this.

What I meant to say was that if you send a vertex to the GPU when there is a modelview transform, and then send another vertex when the modelview transform is set to the identity matrix, the GPU will process them at almost exactly the same speed.

Hm. That is questionable. I think OGL will send only vertices that
were not transformed before (and the GPU holds the result, if possible).
If there is a need to reuse a vertex, OGL will send an index instead
of the (useless) vertex data (again - only if the GPU supports VertexArrays).

The problem with this is that then OpenGL needs to know which vertices the GPU has in cache. With current hardware, you can’t do this. Even if you could, you would have a lot of communication between the GPU and the system memory, which once again slows things down.

> In an implementation without hardware T&L, your method might help. You
> would save yourself some transformation calculations, but on the other
> hand, you would need to check the cache of stored information every
> time you specified a vertex or normal, so that could slow you down.
Why should this slow it down? I don’t understand your line of
reasoning here. So I’ll present how I think it could (should) be …
First we have an array for the (cube) data:
V1x,V1y,V1z,N1x,N1y,N1z,V2x,V2y,V2z … - the (F)irst array,
where V1x means the first vertex’s x coordinate (and so on). Now we call
glDrawElements, which will use this array. OGL creates a list (in CPU
RAM, as a (S)econd array) of flags for every element in the (F) array.
If OGL needs to calculate a newly indexed vertex, it first looks
at array (S) to see whether it really needs to be calculated.
If it was not calculated before, it will be done now (if possible, by the GPU).

What I was saying is that since you would have a cache, each time you specified a vertex or a normal, you would have to go through the cache. If you have a small cache, it would be faster than calculating each vertex. If you have a large cache, the time to find out if the index is already calculated might be longer than the time to simply calculate the vertex anyway.
If you have a small cache, though, you will not be able to reuse the vertices very much, and the benefits of your method would be lost, because you would always need to be respecifying vertices and normals instead of getting them from the cache.

Let’s say I have 1 vertex with 2 surfaces connected to it (same normal)
and different texCoords - I must define (and calculate) 2 vertices.

I see what you are saying now.

It’s basically the same thing as re-using vertices and normals, except with texture coordinates, so most of what I’ve said so far applies.

The way I see it, your new type of vertex arrays can save memory, but I think it would be very hard to reduce vertex processing with them. The cases where they would work best are where there are a lot of vertices that share positions but have different normals and texture coordinates. This mainly occurs in models that have a lot of flat planes and sharp edges. Because of the nature of these types of models, they generally do not use a lot of polygons, so the benefit of your idea is not that high. If you are drawing smooth models with sharp edges only in certain places, the number of vertices that benefit from your scheme is low compared to the number that don’t (those with shared vertex, normal, and texture coordinates), because the latter can use normal vertex arrays, which are easily accelerated by the GPU.

I’ve posted a message in the advanced message board asking for matt’s opinion on this.

I am working on a project that uses a lot of static data.
As I use texturing and LIGHTS, VertexArrays are
out of the question in my case. So I was forced to use display lists.

And that is not good enough for me, because my project is
geometry limited. I was trying to fix that and have
run out of options. But something else comes to mind …

By the way, for the application you mentioned in your first post, how many vertices are you drawing? Is your performance unacceptable? It may be that we can figure out a way to speed up your program using existing OpenGL functionality.

j

[This message has been edited by j (edited 11-30-2000).]

It’s essentially impossible to save any vertex processing when you start mixing vertex attributes.

For lighting, for example, both the vertex and normal matter, so if either one is different, the calculations must be redone.

There is generally also no data transfer savings from doing this.

So it’s unlikely an extension like this will ever happen. The example everyone cites is a cube, but a cube has so few vertices… it just doesn’t matter.

- Matt

Wow, that was fast!

30 seconds after I ask, I have a reply.

Thanks!

j

Hi.

There are still some things I do not get …
but I’ll leave them be (I give up).

By the way, for the application you mentioned in your first post, how many vertices are you drawing? Is your performance unacceptable? It may be that we can figure out a way to speed up your program using existing OpenGL functionality.

~50000V (~15000V if I could reuse some of them)

At the time I started this thread, performance was unacceptable. Currently it is acceptable (due to a good visibility check (takes ~5% of the time) + less detail {I’d guess ~20000V - not sure about that}).

Now I try to use as few lights as possible to gain some detail back - I think that is the best way to go.

–Comment------------------------------
After hours of coding I have found the bug http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/001388.html
It turned out to be total cache thrashing caused by too-big textures.

Time to sleep …

<CS> zZZzz.