About GL_ELEMENT_ARRAY_BUFFER and multiple indexes

So I am a GL beginner, but I have just barely become aware that immediate mode is evil :D. I am currently using vertex buffer objects instead, and they are working fine: I upload my data from a std::vector, then delete the vector and keep the handle.
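For reference, this is roughly what I am doing (a sketch, not my exact code; uploadVerts is a made-up name, and I am assuming GLEW or similar provides the buffer entry points):

#include <GL/glew.h>
#include <vector>

// Upload vertex data to a VBO and return the handle; once
// glBufferData returns, the driver owns a copy, so the CPU-side
// vector can be freed. (Assumes verts is non-empty.)
GLuint uploadVerts(std::vector<GLfloat>& verts)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 verts.size() * sizeof(GLfloat),
                 &verts[0],
                 GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    verts.clear();  // the handle is all we keep
    return vbo;
}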

However, I am not using indexed arrays. I want to, but there is a problem. I can write out .obj files from Blender, and it will be easy for me to write code to parse them. But they appear to have a separate index space each for vertices, normals, and texture coordinates, while GL shares a single index space for all of them.

Am I missing a feature of GL, or is there a fundamental mismatch here? As it stands, all I know how to do is take apart the .obj index format into a non-indexed format in software and forgo indexed rendering. But I’d like a more direct coupling from the Blender object to GL if it is possible, plus it should be more efficient.

What is the general wisdom here?

thanks!

I’m somewhat surprised that Blender doesn’t output a better model format than that, one that does the proper indexing for you. But in general, what you need to do is collapse all of the indices for each vertex into a single index and render them that way.
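An untested sketch of that collapse step (this assumes you have already parsed the .obj into flat position/normal/texcoord arrays plus one v/n/t index triple per face corner, with the 1-based .obj indices already converted to 0-based):

#include <map>
#include <vector>

// One face corner from the .obj: indices into the position,
// normal and texcoord arrays.
struct IndexTriple {
    unsigned v, n, t;
    bool operator<(const IndexTriple& o) const {
        if (v != o.v) return v < o.v;
        if (n != o.n) return n < o.n;
        return t < o.t;
    }
};

struct Vertex { float pos[3]; float norm[3]; float tex[2]; };

// Collapse (v,n,t) triples into a single index space: each unique
// combination becomes one GL vertex; repeats re-use the same index.
void weld(const std::vector<float>& positions,  // 3 floats per position
          const std::vector<float>& normals,    // 3 floats per normal
          const std::vector<float>& texcoords,  // 2 floats per texcoord
          const std::vector<IndexTriple>& corners,
          std::vector<Vertex>& outVerts,
          std::vector<unsigned>& outIndices)
{
    std::map<IndexTriple, unsigned> seen;
    for (size_t i = 0; i < corners.size(); ++i) {
        const IndexTriple& c = corners[i];
        std::map<IndexTriple, unsigned>::iterator it = seen.find(c);
        if (it == seen.end()) {
            Vertex vert;
            for (int k = 0; k < 3; ++k) vert.pos[k]  = positions[c.v * 3 + k];
            for (int k = 0; k < 3; ++k) vert.norm[k] = normals[c.n * 3 + k];
            for (int k = 0; k < 2; ++k) vert.tex[k]  = texcoords[c.t * 2 + k];
            outVerts.push_back(vert);
            it = seen.insert(std::make_pair(c, unsigned(outVerts.size() - 1))).first;
        }
        outIndices.push_back(it->second);
    }
}

Then outVerts goes into a GL_ARRAY_BUFFER and outIndices into a GL_ELEMENT_ARRAY_BUFFER.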

Well, that depends on the definition of “better”, I guess :). Certainly Blender’s approach seems more efficient, because it replicates each type of data the minimum amount possible.

It might have options to export a more GL-friendly format - I will investigate that. Mostly I was interested in whether GL can even consume data with a different set of indices for vertices/normals/texcoords.

OpenGL does not have indexing per attribute. I added a feature request in October 2009:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=265744

If you follow the thread, you will find a workaround within the current capabilities of OpenGL by “Ilian Dinev”.

You are not the first one who would be pleased by an “indexing per attribute” feature. Here are 9 other threads about this issue since October 2009:

2010-06-10
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=278923

2010-06-10
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=278928

2010-06-01
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=278477

2010-04-12
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=275770

2010-03-12
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=273786

2010-03-08
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=273508

2010-01-11
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=270066

2009-11-01
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=266470

2009-10-28
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=266261

Overlay: wow! :)

I like GL but I do think that certain things seem harder than they really ought to be.

Thanks for the links!

Certainly Blender’s approach seems more efficient, because it replicates each type of data the minimum amount possible.

And it simultaneously adds one extra index per attribute per vertex consumed. For any real model, where there is fairly little replication anyway (i.e., it is smooth), the extra indices will make it take up more space overall. So in general, it buys you nothing.

It’s called a “misfeature”.

There are lots of models that have replicated data: buildings and building contents, e.g. doors and radiators/furniture, tend to have lots of flat surfaces that are parallel to each other, which leads to lots of common normals and shared triangle vertices. You could have 6 normals for up/down/left/right/front/back, or hundreds/thousands of replicated normals if each vertex has its own normal.
If you have to form unique vertices, it also means you have to duplicate the position data, so the position data will take up more space.

You might have a door that could be modelled with a grid (where each vertex is surrounded by 6 triangles) which could have:
120 positions (120 * 3 * 4 = 1440 bytes)
14 normals (14 * 3 * 4 = 168 bytes)
Assuming you were using GL_TRIANGLES, you would have 720 triangle-vertices in this grid.
position indices: 720 * 1 byte = 720 bytes
normal indices: 720 * 1 byte = 720 bytes
total = 1440 + 168 + 720 + 720 = 3048 bytes

But when you force each vertex to be unique, every vertex now needs to replicate the position as well as the normal, so you would have:

~240 positions (each position is used by 6 triangles, of which roughly 3 share the same normal too and so can be merged into one vertex) (240 * 3 * 4 = 2880 bytes)
~240 normals (240 * 3 * 4 = 2880 bytes)
720 1-byte indices (720 * 1 = 720 bytes); but if the model were slightly larger, you would need to use 2-byte indices, since we could no longer identify every piece of data with a single byte

total = 2880 + 2880 + 720 = 6480 bytes

That’s over twice the size. Even if the indices in the separate-indices case used 2 bytes, that would still only be 4488 bytes vs. 7200 bytes; the model without separate indices is still much larger, because of the extra space required to duplicate each position as well as each normal. With the larger number of vertices, you will also need to start using larger index types sooner.

If separate indices were introduced to OpenGL, there could be caching of values per index: if the hardware has just fetched the normal with index 0, it could keep that value around and re-use it when the next vertex references the same index.

It could even store calculated values that depend only on this attribute and the uniforms, which might save some matrix multiplications; for example, if you had

n = normalize(NormalMatrix * normal);

then this computed value could be cached. This would mean you might want to group all triangles with the same normal next to each other in the mesh.
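Grouping them that way is easy to do offline; a rough sketch (assuming a flat-shaded mesh where each triangle carries a single normal index - a hypothetical layout, just to illustrate):

#include <algorithm>
#include <vector>

// A flat-shaded triangle: three position indices plus one normal index.
struct Tri {
    unsigned posIdx[3];
    unsigned normIdx;
};

bool byNormal(const Tri& a, const Tri& b) { return a.normIdx < b.normIdx; }

// Reorder triangles so that those sharing a normal are adjacent,
// giving a per-attribute cache consecutive hits on the normal stream.
void groupByNormal(std::vector<Tri>& tris)
{
    std::stable_sort(tris.begin(), tris.end(), byNormal);
}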

Of course it would only be useful for certain types of models, but that’s why there are different drawing routines, etc.

Yes, I agree with Dan Bartlett. In fact, looking at my models, that very thing is the case: there are many more unique positions than unique normals.

But the point is also that without a separate index space, my only choice is to de-index the whole model entirely, which is certainly bigger!
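(By de-indexing I mean essentially the weld() sketch from above with the map removed - reusing the same IndexTriple/Vertex structs - so every face corner becomes a full vertex, duplicates and all, and there is no element array at the end:)

// De-indexing, for contrast: same inputs as weld(), but no sharing.
void deindex(const std::vector<float>& positions,
             const std::vector<float>& normals,
             const std::vector<float>& texcoords,
             const std::vector<IndexTriple>& corners,
             std::vector<Vertex>& outVerts)
{
    for (size_t i = 0; i < corners.size(); ++i) {
        const IndexTriple& c = corners[i];
        Vertex vert;
        for (int k = 0; k < 3; ++k) vert.pos[k]  = positions[c.v * 3 + k];
        for (int k = 0; k < 3; ++k) vert.norm[k] = normals[c.n * 3 + k];
        for (int k = 0; k < 2; ++k) vert.tex[k]  = texcoords[c.t * 2 + k];
        outVerts.push_back(vert);  // repeated triples are simply stored again
    }
}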

14 normals (14 * 3 * 4 = 168 bytes)

Are you still passing normals as floats? And you’re complaining about the space efficiency of indexed rendering? At a bare minimum, normals can be shorts, if not bytes.

Maybe you should use the optimizations you have first, and then worry about “optimizations” you don’t have.
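For example, something like this (an untested sketch; packNormal and normalLoc are made-up names, and I am assuming a shader-based pipeline with GLEW providing the GL 2.0 entry points):

#include <GL/glew.h>

// Pack a unit-length normal into 3 signed bytes; with the
// "normalized" flag set below, GL rescales them back to [-1, 1].
struct PackedNormal { GLbyte x, y, z, pad; };  // pad keeps 4-byte alignment

PackedNormal packNormal(float nx, float ny, float nz)
{
    PackedNormal p;
    p.x = (GLbyte)(nx * 127.0f);
    p.y = (GLbyte)(ny * 127.0f);
    p.z = (GLbyte)(nz * 127.0f);
    p.pad = 0;
    return p;
}

// At setup time, with the VBO of PackedNormals bound to GL_ARRAY_BUFFER:
void setNormalPointer(GLuint normalLoc)  // attribute location from your shader
{
    glVertexAttribPointer(normalLoc, 3, GL_BYTE,
                          GL_TRUE,  // normalized: bytes map back to [-1, 1]
                          sizeof(PackedNormal), (const GLvoid*)0);
    glEnableVertexAttribArray(normalLoc);
}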

You might have a door that could be modelled with a grid (where each vertex is surrounded by 6 triangles) which could have

Unless you need this door to break, having such a door with 240+ individual triangles is needlessly wasteful. By “such a door”, I don’t mean a decorative door where those triangles actually have raised positions or other actual details (which naturally would be smooth). I mean the kind of door where you would legitimately have only 14 individual normals.

What, are you doing vertex lighting? In 2010?

This example is typical of the standard kinds of examples that come up in complaints about the lack of this feature. It is deliberately artificial, and would never be of value in actual rendering circumstances.

but if the model were slightly larger, would need to use 2-byte indices since we can no longer identify every piece of data with a single byte

The same 2-byte indices would be needed in the multiple-index case as well. And in that case, you pay the index cost twice.