Vertices with different attributes and Array-objects

Hello,
I stumbled upon a problem: let's say I want to emit vertices that define different sets of attributes to the renderer. What would be the appropriate way to do this?
Imagine a sequence of GL commands like


// vertex0
glColor3f(...);
...
glNormal3f(...);
glVertex3f(...);

// vertex1
glNormal3f(...);
glVertex3f(...);

// vertex2
glVertex3f(...);

To me, defining an array seems possible only for the vertex attribute, and such a sequence could not be emitted via glDrawElements. To minimize the data flow to the graphics adapter, this could be transformed to


/* Define a vertex array */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(...);

// vertex0
glColor3f(...);
glNormal3f(...);
glArrayElement(...);

// vertex1
glNormal3f(...);
glArrayElement(...);

// vertex2
glArrayElement(...);

which would allow reuse of the vertex (point) data via an index, but not of the additional attributes present in vertices 0 and 1. Of course, arrays for the additional attributes could be defined, but as I read the documentation, a construct like



// vertex0
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glArrayElement(...);

// vertex1
glDisableClientState(GL_COLOR_ARRAY);
glArrayElement(...);

// vertex2
glDisableClientState(GL_NORMAL_ARRAY);
glArrayElement(...);

is not allowed. Am I missing the right functionality to emit vertices with different sets of attributes to the renderer without having to send (possibly identical) data over the data bus more than once?

To be more precise: I have model data stored in a compact form like this


/* Vertex-attribute values are simple vector structures (sequences of floats) */
{
  Vector0 = { x,y,z,w },
  Vector1...
}

/* Vertices are defined as a bitmask indicating the attributes present, followed by indices into the vertex-attribute-value array */
{
  POINT_BIT+NORMAL_BIT+COLOR_BIT, IndexOfColor, IndexOfNormal, IndexOfPoint,
  POINT_BIT+NORMAL_BIT, IndexOfNormal, IndexOfPoint,
  POINT_BIT, IndexOfPoint
}

/* Triangles are defined by offsets into the vertex array */
{
  OffsetOfVertex0, OffsetOfVertex1, OffsetOfVertex2
}

The question posted at the beginning arose from this: how can such data be emitted efficiently?

Thanks for ideas

Am I missing the right functionality to accomplish emitting vertices with different sets of attributes to the renderer without having to send (possible identical data) over the data-bus more than once?

How do you know that the immediate mode code you posted in the first example doesn’t send the “possibly identical data” across the bus?

If you want to maximize performance (which is the only reason why you should care about what gets sent “over the data-bus”), then what you want to do is put your vertex data in some buffer objects and render them as arrays with some form of glDrawElements/glDrawArrays. If that requires duplicating attribute data, so be it; it will be faster than your immediate mode code.

“How do you know that the immediate mode code you posted in the first example doesn’t send the “possibly identical data” across the bus?”
No, no. I'm pretty sure that data gets duplicated using immediate-mode commands. That's why I was looking for another way to do it. Sorry if I didn't express myself clearly.

“If that requires duplicating attribute data, so be it”
That is the point of the question: is there really no way to avoid that? It may require duplicating not only attribute data but whole vertices, and is hence likely to bloat memory usage. Forget the phrase about the data bus; it was just a rationale for efficient data storage and minimal redundancy.
Duplicating the data is not only second choice with regard to memory efficiency: filling in the omitted attributes is a quite nontrivial task, as in the concrete case it would require information about the whole triangle, maybe even the whole model or scene, if changes in the vertex-attribute state are to be carried along.

Is memory efficiency a particular concern for you? If not… who cares?

Forget the phrase about the data-bus - it’s just a reasoning for efficient data-storage and minimizing redundancy.

But that’s exactly my point: it’s not a reason for that. Memory and performance often have to be traded; to get one, you must lose the other.

Given the appropriate hardware, you can separately index vertex data. But doing so will be far less efficient than using conventional techniques. So if you want memory, you have to give up performance. And if you want performance, you want to use vertex attributes with a single index, which means higher (theoretically, at least) memory consumption.

Filling up the omitted attributes is a quite nontrivial task as it would require information about the whole triangle in the concrete case, maybe even the whole model or scene if changes in the state of the vertex-attributes are to be carried along.

Are you making a Minecraft-clone? If not, then you’re going to find that most real-world models have very reasonable topology. Attribute duplication will be minimal. Remember: just about every polygonal 3D game you’ve ever played works this way.

“Is memory efficiency a particular concern for you? If not… who cares?”
I’m not sure. An observation from concrete data led me to this: the data used tiled textures, which leads to a lot of repetitive texture coordinates in addition to shared point coordinates.

“Given the appropriate hardware, you can separately index vertex data…”
I guess the mentioned performance decrease results from using custom shaders instead of the fixed pipeline. Being able to use multi-indexing directly is definitely something I would appreciate seeing in one of the forthcoming GL versions, as well as the ability to enable/disable arrays between glBegin and glEnd.

“Are you making a Minecraft-clone?”
That wouldn’t be the worst idea… :slight_smile: And in fact, knowing my skills as a 3D modeller, and being unable/unwilling to afford a 3D scanner to scan and edit old action figures, something in that direction seems realistic. :slight_smile:

About the minimal attribute duplication: I wouldn’t have thought so, as normals and point data seem to be commonly shared.

I guess I’ll simply use immediate mode until it gets too slow, then optimize by using arrays for the attribute set common to all vertices and immediate mode for the additional attributes only.

Thanks

An observation from concrete data led me to this - given the fact that it used tiled textures which leads to a lot of repetitive texture-coordinates apart from shared point-coordinates.

What is a “tiled texture”?

I guess the mentioned performance-decrease results from using custom shaders instead of the fixed pipeline.

No, it results from using buffer textures to fetch data, rather than the typical vertex attribute system that is normally used. It also means that you have to do any decompression of the attributes yourself in shader logic.

Being able to use multi-indexing directly would be definately something I would appreciate to see in one of the forthcoming GL-versions as well as the ability to enable/disable arrays between begin and end.

I wouldn’t hold my breath. You do know that glBegin/glEnd were removed in GL 3.1 core, yes? Along with all rendering methods besides vertex data stored in buffer objects.

About the minimal attribute duplication: I wouldn’t have thought so as normal-space and point-data seems commonly shared.

They are. So are texture coordinates. And colors. I don’t see the problem.

It’s all about the topology of a mesh. And in most meshes, the topology is generally smooth.

‘What is a “tiled texture”?’
I mean a texture that can be repeated in any direction without producing visible seams, so that two quads with the same texture coordinates can be placed side by side.

“No, it results from using buffer textures to fetch data”
Can I take this as confirmation that the programmable pipeline is by its nature no slower than the fixed-function pipeline (e.g. when implementing the same functionality)? Just curious; otherwise I wouldn’t have asked…

“No, it results from using buffer textures to fetch data, rather than the typical vertex attribute system that is normally used.”
Ah, you caught me there; I didn’t really read the comment. But when using custom shaders, that part of the problem can be solved quite easily, as the information about whether attributes are omitted or not can be exploited.

"I wouldn’t hold my breath. You do know that glBegin/glEnd were removed in GL 3.1 core, yes? Along with all rendering methods besides vertex data stored in buffer objects. "
Which makes it interesting to be able to emit such data as a particular attribute depending on the context. But I guess the functionality would already be in place if this really posed a problem.

Thanks for the insight