Enhanced vertex arrays

I’d just like to know if and when enhanced vertex arrays will appear, mainly VERTEX_INDEX, NORMAL_INDEX…

I don’t think this will ever happen, at least not soon…

Think about it: with real geometry you won’t save much memory from duplicated vertices, and on the other hand you need much more memory for the index arrays.

And performance would be much worse because the memory access gets even more random than with simple indices…

In the (quite old) paper I have about the GL 2.0 specs (and memory consumption), they spoke about it, and the interface was defined… So I guessed it would happen some day. So this wasn’t just an idea :slight_smile:

I know that the index arrays will have some memory consumption, but they won’t be as huge as the vertex arrays, normal arrays and so on. Also, it would help the driver benefit from the cache, which can be a good thing.

That’s what I think.

I didn’t know about this being in the old GL 2.0 paper. I still doubt it would be useful with real world data…

You’ll still need vertex and normal data in your arrays, so you won’t gain that much space, only for the few vertices that share the same position but have different normals or something similar.

On the other hand, you have double index data.

And about the cache, I highly doubt there is any benefit; the driver can only cache whole vertices, because it doesn’t know how the various input parameters are combined by the vertex shader.

Or are you referring to simple memory caching of the input data? In that case, cache performance will most likely be worse because of more random access and generally increased data volume due to the index data…

Or are you referring to simple memory caching of the input data? In that case, cache performance will most likely be worse because of more random access and generally increased data volume due to the index data…
Indices are not cached. Indices are transformed into “real” pointers into memory by the pre-T&L vertex processor. So you don’t get any worse cache performance from “increased data volume due to the index data”.

And, of course, that presupposes that the total vertex data is larger than the other way around. I would suspect that, if it were larger, then the preprocessing step that every developer uses on their meshes would pick the smaller layout. In the occurrences where this way gives a size win, it gets used. And then, there’s the following.

Second, cache performance overall can be better, because duplicated data is fetched directly from the cache. If you have one vertex position that is used by 3 vertices, you only ever access it once. Under the old paradigm, you had 3 separate vertices in 3 separate memory locations, so they had to be transferred from memory 3 times.

So, in the case of duplicate data, there’s almost always a performance win. And when it doesn’t win, it’s a tie.

I have many geometries that share vertices but whose normals (and in some ways texture coordinates too) differ. So using enhanced arrays would de facto use less memory and, as Korval states, would also help some caches (which one(s)? I don’t know…), simply because there would be fewer repetitions in the arrays: a vertex, a normal or a texcoord would have a better chance of still being in the graphics memory cache and being reused for the next primitive renderings than with plain full vertex arrays. (Arranging the array so that the repeating vertices sit very near each other might achieve something similar, but for me that is almost not doable.) There are also other situations in which this would help a lot. [hope that’s understandable]

I don’t know where the ARB actually stands regarding this feature; I only know there are a lot of things to do before it becomes available (mainly memory management… I don’t have the paper here, so I can’t be more precise).
The papers I refer to are about 4 or 5 years old and spoke of many interesting things, including other programmable pipelines… So if someone knows where I could find out whether this will be implemented in future releases of GL, please tell me.

Note: I actually wasn’t able to find the papers on this site.

Korval, you’re contradicting things you’ve said in previous posts.

This was discussed pretty well here:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012713

Thanks, this was the discussion I was searching for :wink:

jwatte mentioned some tests with real world meshes there that resulted in multiple index arrays needing more memory than single index arrays.

Of course it is easy to construct an artificial example where you save memory with multiple indices. But with higher polygon and vertex counts, it gets more and more likely that the multiple index arrays cost more memory than is saved by removing duplicate vertices.

Thanks for the link; I read it quite quickly and might read it more deeply soon.

What jwatte said is interesting, but he also used normalized arrays (which seem to be smaller than non-normalized arrays, don’t they?) for the comparison with multiple indexed arrays. I guess the difference in memory consumption between them really depends on the geometry you have: if many shared vertices also have the same normals and the same texture coordinates (and color values), using multiple indexed arrays won’t really be a good choice. But if, in an array (whether it is a vertex array, a normal array… or all of them), much of the data repeats throughout the array, I guess one can expect some memory gain. That was about memory issues.

In fact, the main topic of this thread is: will that feature be implemented, and if so, when?

I didn’t want to fall into a discussion about the pros and cons of those kinds of arrays, but the issues of memory consumption and the effectiveness of the pre/post-T&L caches do interest me.
I just don’t want to roam here.

And as none of you knows if and when they will appear, this probably means that, as you stated, Overmind, it will not be there in the near future.

Korval, actually you’re not contradicting yourself. I apologise.

Couldn’t it be done using the programmable pipeline? After all, the extra indices could just be stored as generic vertex attributes and looked up using vertex texture fetch (obviously not with today’s hardware). Or not? Would there be a real difference?

I am not really sure about the performance improvement. We’re trading some free memory for a much more difficult access pattern.

Of course it could be done, even with today’s hardware, but it would be extremely slow.

Perhaps this will change in two or three generations, when more applications for vertex texture access appear and the hardware improves. It would be a logical followup to the current trend of abandoning fixed functions in favor of programmable shaders.

It also remains to be seen if this feature is really necessary, or just redundant with geometry shaders.

But that’s all very speculative right now :wink:

Originally posted by Overmind:
Of course it could be done, even with today’s hardware, but it would be extremely slow.

Well, this is just what I meant.
Anyway, my personal opinion is that if it can be done using shaders then it’s unlikely it’ll make it to core API.

Anyway, my personal opinion is that if it can be done using shaders then it’s unlikely it’ll make it to core API.
Nonsense.

There is literally no purpose in doing it unless the hardware supports it. Having the shader do texture lookups (slow) negates all benefits of using the technique at all. This is a performance technique (for the most part); the alternative shader method is slower than not using the method at all, which makes it useless.

Assuming, that is, that on future hardware there will still be a noticeable performance penalty for using vertex textures.

That’s what I meant with “very speculative”, it was not meant as a concrete feature suggestion, just as a possible outlook to a few (or perhaps a bit more) years from now :stuck_out_tongue: