VAO, IBO, VBO, vertex-attributes, uniform variables, program objects --- how workie?

I’ve been working on a 3D simulation/game engine for a while, and while it works, I still don’t fully understand all the relationships between:

#1: VAO == vertex array object
#2: IBO == GL_ELEMENT_ARRAY_BUFFER object
#3: VBO == GL_ARRAY_BUFFER object
#4: vertex-attributes
#5: uniform variables
#6: shader objects
#7: program objects

And especially, when various actions need to be taken (when certain OpenGL functions need to be called).

I suppose I better briefly describe the nature of the engine to provide context.

The engine contains “batches”, each of which contains a VAO, IBO, VBO. Each batch contains objects that are “compatible” in the sense that they are composed of the same primitives (points, lines or triangles/surfaces), their vertices are arrays of the same vertex structure (and thus all have identical attributes at identical byte offsets), they all access the same images (texturemaps, surfacemaps, conemaps, etc) from the same array textures and texture-units, and they all can be rendered with the same program object (and thus the same set of shaders).

Therefore, to draw everything for a frame just requires looping through all the batches and calling glDrawElements() once, passing in the number of indices in the IBO and a starting offset of zero.

Whenever a batch becomes full, or when an object is added that is composed of a different type of primitive, or has a different vertex layout, or is to be rendered by a different program object, or requires any image (texturemap, surfacemap, conemap, etc) that is not currently available in the currently active array textures… a new batch is created for that object and any future object that is compatible.

So far, all objects have the same vertex structure and other requirements stated in the previous paragraph… except for objects that expect to be rendered as points or lines, which naturally cause new batches to be created for them.

When a new batch is created, a new VAO, IBO, VBO are created and their OpenGL identifiers are saved in the batch structure for future reference. Also, the following code is executed to specify the byte-offsets of each vertex-attribute in every vertex in the VBO, and since the VAO, IBO, VBO are first made active by calling glBindVertexArray() and then glBindBuffer() for the IBO and the VBO (in that order), I assume the IBO and the byte-offsets of all vertex-attributes are recorded properly in the VAO.

As an aside, later I plan to have fewer but huge (maximum size) VBO objects that contain vertices for many batches and thus many IBO. For some reason I didn’t originally realize the one-for-one mapping of IBO and VBO was unnecessary and also unwise. But for now, there is one VAO, IBO, VBO for each batch.


//
// get offset to each element in ig_vertex32 structure (vertex structure we put into VBO)
//
    u08* base = (u08*)&vertex;
    u08* offset0 = (u08*)((u08*)&vertex.position - (u08*)base);                         // 0x0000 to 0x000F ::: 4 * f32 == 3 * f32 position.xyz + 1 * f32 east.x
    u08* offset1 = (u08*)((u08*)&vertex.zenith - (u08*)base);                           // 0x0010 to 0x001F ::: 4 * f32 == 3 * f32 zenith.xyz + 1 * f32 east.y
    u08* offset2 = (u08*)((u08*)&vertex.north - (u08*)base);                            // 0x0020 to 0x002F ::: 4 * f32 == 3 * f32 north.xyz + 1 * f32 east.z
    u08* offset3 = (u08*)((u08*)&vertex.color - (u08*)base);                            // 0x0030 to 0x0033 ::: 4 * u08 == 4 * u08 color.rgba
    u08* offset4 = (u08*)((u08*)&vertex.tcoord - (u08*)base);                           // 0x0034 to 0x0037 ::: 2 * u16 == 2 * u16 tcoord.xy
    u08* offset5 = (u08*)((u08*)&vertex.mixmatsay - (u08*)base);                        // 0x0038 to 0x003F ::: 2 * u32 == mixmatsay.xy
//
// define vertex attributes --- each corresponds to one component of the vertex structure we put into the VBO
//
    glVertexAttribPointer  (0, 4, GL_FLOAT, GL_FALSE, vbytes, offset0);                 // 4 * f32 == position.xyz + east.x
    glVertexAttribPointer  (1, 4, GL_FLOAT, GL_FALSE, vbytes, offset1);                 // 4 * f32 == zenith.xyz + east.y
    glVertexAttribPointer  (2, 4, GL_FLOAT, GL_FALSE, vbytes, offset2);                 // 4 * f32 == north.xyz + east.z
    glVertexAttribPointer  (3, 4, GL_UNSIGNED_BYTE, GL_TRUE, vbytes, offset3);          // 4 * u08 == color.rgba (normalized to 0.0~1.0)
    glVertexAttribPointer  (4, 2, GL_UNSIGNED_SHORT, GL_TRUE, vbytes, offset4);         // 2 * u16 == tcoord.xy (normalized to 0.0~1.0)
    glVertexAttribIPointer (5, 2, GL_UNSIGNED_INT, vbytes, offset5);                    // 2 * u32 == mixmatsay.xy (integer attribute, no conversion)
//
// enable every vertex attribute ::: causes OpenGL to automatically transfer these vertex attributes from VBO to each vertex shader before vertex shader execution starts
//
    glEnableVertexAttribArray (0);                                                      // enable vertex position.xyz : east.x
    glEnableVertexAttribArray (1);                                                      // enable vertex zenith.xyz : east.y
    glEnableVertexAttribArray (2);                                                      // enable vertex north.xyz : east.z
    glEnableVertexAttribArray (3);                                                      // enable vertex color.rgba
    glEnableVertexAttribArray (4);                                                      // enable vertex tcoord.xy
    glEnableVertexAttribArray (5);                                                      // enable vertex mixmatsay.xy

    batch->index_position        = glGetAttribLocation (glprogram, "ig_position");      // position.xyz + east.x                ::: in vertex shader    == layout (location = 0) in  vec4 ig_position;      // vertex position.xyz ::: position.w contains east.x
    batch->index_zenith          = glGetAttribLocation (glprogram, "ig_zenith");        // zenith.xyz + east.y                  ::: in vertex shader    == layout (location = 1) in  vec4 ig_zenith;        // vertex zenith.xyz vector AKA normal vector ::: zenith.w contains east.y
    batch->index_north           = glGetAttribLocation (glprogram, "ig_north");         // north.xyz + east.z                   ::: in vertex shader    == layout (location = 2) in  vec4 ig_north;         // vertex north.xyz vector AKA bitangent vector ::: north.w contains east.z
    batch->index_color           = glGetAttribLocation (glprogram, "ig_color");         // color.rgba                           ::: in vertex shader    == layout (location = 3) in  vec4 ig_color;         // vertex color.rgba
    batch->index_tcoord          = glGetAttribLocation (glprogram, "ig_tcoord");        // tcoord.xy                            ::: in vertex shader    == layout (location = 4) in  vec4 ig_tcoord;        // vertex tcoord.xy
    batch->index_mixmatsay       = glGetAttribLocation (glprogram, "ig_mixmatsay");     // mixmatsay == mixid, tmatid, saybit   ::: in vertex shader    == layout (location = 5) in ivec2 ig_mixmatsay;     // vertex mixmatsay.xy ::: mixmatsay.x == tmapid, smapid, cmapid, xmapid : mixmatsay.y == tmatid, saybit
//
// should the slots of uniform variables be specified here... or somewhere else?
//
    batch->index_transform      = glGetUniformLocation (glprogram, "ig_transform");     // uniform variable == transform matrix (modelviewprojection) - xxxxx
    batch->index_clight0        = glGetUniformLocation (glprogram, "ig_clight0");       // uniform variable == light #0 color
    batch->index_clight1        = glGetUniformLocation (glprogram, "ig_clight1");       // uniform variable == light #1 color
    batch->index_clight2        = glGetUniformLocation (glprogram, "ig_clight2");       // uniform variable == light #2 color
    batch->index_clight3        = glGetUniformLocation (glprogram, "ig_clight3");       // uniform variable == light #3 color
    batch->index_plight0        = glGetUniformLocation (glprogram, "ig_plight0");       // uniform variable == light #0 position
    batch->index_plight1        = glGetUniformLocation (glprogram, "ig_plight1");       // uniform variable == light #1 position
    batch->index_plight2        = glGetUniformLocation (glprogram, "ig_plight2");       // uniform variable == light #2 position
    batch->index_plight3        = glGetUniformLocation (glprogram, "ig_plight3");       // uniform variable == light #3 position
    batch->index_pcamera        = glGetUniformLocation (glprogram, "ig_pcamera");       // uniform variable == camera position == active camera
    batch->index_tmap           = glGetUniformLocation (glprogram, "ig_tmap");          // texture-map #0 == texture-unit #0 : texture maps
    batch->index_smap           = glGetUniformLocation (glprogram, "ig_smap");          // texture-map #1 == texture-unit #1 : surface maps
    batch->index_cmap           = glGetUniformLocation (glprogram, "ig_cmap");          // texture-map #2 == texture-unit #2 : cone maps
    batch->index_xmap           = glGetUniformLocation (glprogram, "ig_xmap");          // texture-map #3 == texture-unit #3 : x maps (unknown maps)

More and more of the IBO and VBO are filled with glBufferSubData() as new shape objects are created and their indices and vertices are appended to the IBO and VBO assigned to that batch.
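Roughly, each append looks like this (the byte-count bookkeeping fields shown here are illustrative, not my exact names):

    // append new indices and vertices after the space already used in each buffer
    glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, batch->iboid);
    glBufferSubData (GL_ELEMENT_ARRAY_BUFFER, batch->ibo_bytes_used, new_index_bytes, new_indices);
    batch->ibo_bytes_used += new_index_bytes;

    glBindBuffer (GL_ARRAY_BUFFER, batch->vboid);
    glBufferSubData (GL_ARRAY_BUFFER, batch->vbo_bytes_used, new_vertex_bytes, new_vertices);
    batch->vbo_bytes_used += new_vertex_bytes;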

QUESTION: I assume that [maybe] I need to record the vertex-attribute “layout locations” in the batch structure as done in the code snippet above. But is that necessary? Will making the VAO of a given batch active by calling glBindVertexArray() do everything necessary to inform OpenGL of the vertex-attribute byte-offsets and vertex-attribute “layout locations” in the future, when batches are drawn?

NOTE: Standard practice for my shaders is to fully specify layout (location == index) for every vertex-attribute and every uniform variable/vector/matrix. Therefore, whenever any program object is made active by calling glUseProgram(), OpenGL already knows the locations of every vertex-attribute and every uniform variable/vector/matrix without my code specifying any locations. In fact, my engine never specifies any locations; it only queries the locations of vertex-attributes and uniform variables/vectors/matrices from OpenGL and saves them in its various structures (batch structures and program structures).

When the batches are drawn, code like the following is executed:


    u32 vaoid = batch->vaoid;
    u32 iboid = batch->iboid;
    u32 vboid = batch->vboid;
    u32 ptype = batch->primitive;
    u32 icount = batch->ielementn;
//
// bind VAO
//
    if (vaoid != glstate.active_vao) {
        glBindVertexArray (vaoid);
        glstate.active_vao = vaoid;
    }
//
// bind IBO
//
    if (iboid != glstate.active_ibo) {
        glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, iboid);
        glstate.active_ibo = iboid;
    }
//
// bind VBO
//
    if (vboid != glstate.active_vbo) {
        glBindBuffer (GL_ARRAY_BUFFER, vboid);
        glstate.active_vbo = vboid;
    }

    switch (ptype) {
        case IG_PRIMITIVE_TRIANGLE:    gmode = GL_TRIANGLES; break;
        case IG_PRIMITIVE_LINE:        gmode = GL_LINES; break;
        case IG_PRIMITIVE_POINT:       gmode = GL_POINTS; break;
        default:                       gmode = GL_TRIANGLES; break;
    }

    if (icount) {
        glDrawElements (gmode, icount, GL_UNSIGNED_INT, 0);      // 32-bit indices only
    }

NOTE: As a matter of general policy, before the engine changes any OpenGL state, it checks whether the same state is already set, and if so, skips calling the OpenGL function to set that state. Hence the strange-looking code above wherever OpenGL state is set.

Elsewhere, when a program object is created, the following code is executed in ig_program_create():


//
// set OpenGL "program object" in IG "program object" structure
//
    program->objid_vshader      = objid_vshader;                                         // objid of vertex shader
    program->objid_tcshader     = objid_tcshader;                                        // objid of tessellation control shader
    program->objid_teshader     = objid_teshader;                                        // objid of tessellation evaluation shader
    program->objid_gshader      = objid_gshader;                                         // objid of geometry shader
    program->objid_fshader      = objid_fshader;                                         // objid of fragment shader
    program->objid_yshader      = objid_yshader;                                         // objid of y shader
    program->objid_zshader      = objid_zshader;                                         // objid of z shader
    program->objid_cshader      = objid_cshader;                                         // objid of compute shader
    glerror = glGetError();
//
// all the following slots for vertex-attributes and uniform variables are known for the program object because this information is explicitly specified in the shaders with "layout" and "location" syntax
//   - note that program objects do not care about byte offsets of the various vertex-attributes within the vertex structure because each attribute is delivered to the shaders in a specific 16-byte "location"
//     - therefore, for example, even though color.rgba only consumes 32-bits in the vertex (4 * u08 RGBA elements), color is converted by OpenGL to (4 * f32 RGBA elements) and consumes all of "location 3" AKA "slot 3"
//     - therefore, for example, even though tcoord.xy only consumes 32-bits in the vertex (2 * u16 x,y elements), tcoord.xy is converted by OpenGL to (4 * f32 x,y,0,1 elements ::: missing z,w default to 0,1) and consumes all of "location 4" AKA "slot 4"
//
    program->index_position     = glGetAttribLocation  (glprogram, "ig_position");       // position.xyz + east.x                ::: in vertex shader    == layout (location =  0) in  vec4 ig_position;     // vertex position.xyz ::: position.w contains east.x
    program->index_zenith       = glGetAttribLocation  (glprogram, "ig_zenith");         // zenith.xyz + east.y                  ::: in vertex shader    == layout (location =  1) in  vec4 ig_zenith;       // vertex zenith.xyz vector AKA normal vector ::: zenith.w contains east.y
    program->index_north        = glGetAttribLocation  (glprogram, "ig_north");          // north.xyz + east.z                   ::: in vertex shader    == layout (location =  2) in  vec4 ig_north;        // vertex north.xyz vector AKA bitangent vector ::: north.w contains east.z
    program->index_color        = glGetAttribLocation  (glprogram, "ig_color");          // color.rgba                           ::: in vertex shader    == layout (location =  3) in  vec4 ig_color;        // vertex color.rgba
    program->index_tcoord       = glGetAttribLocation  (glprogram, "ig_tcoord");         // tcoord.xy                            ::: in vertex shader    == layout (location =  4) in  vec4 ig_tcoord;       // vertex tcoord.xy
    program->index_mixmatsay    = glGetAttribLocation  (glprogram, "ig_mixmatsay");      // mixmatsay == mixid, tmatid, saybit   ::: in vertex shader    == layout (location =  5) in ivec2 ig_mixmatsay;    // vertex mixmatsay.xy ::: mixmatsay.x == tmapid, smapid, cmapid, xmapid : mixmatsay.y = tmatid, saybit
    glerror = glGetError();
//
// all the following slots for uniform variables are known for the program object because this information is explicitly specified in the shaders with "layout" and "location" syntax
//   - every uniform variable or vector consumes exactly one 16-byte "location" AKA "slot"
//   - every uniform matrix consumes exactly four 16-byte "locations" or "slots"
//
    program->index_transform    = glGetUniformLocation (glprogram, "ig_transform");      // uniform variable == transform matrix (modelviewprojection)    == layout (location =  0) uniform mat4 ig_transform
    program->index_clight0      = glGetUniformLocation (glprogram, "ig_clight0");        // uniform variable == light #0 color                            == layout (location =  4) uniform vec4 ig_clight0
    program->index_clight1      = glGetUniformLocation (glprogram, "ig_clight1");        // uniform variable == light #1 color                            == layout (location =  5) uniform vec4 ig_clight1
    program->index_clight2      = glGetUniformLocation (glprogram, "ig_clight2");        // uniform variable == light #2 color                            == layout (location =  6) uniform vec4 ig_clight2
    program->index_clight3      = glGetUniformLocation (glprogram, "ig_clight3");        // uniform variable == light #3 color                            == layout (location =  7) uniform vec4 ig_clight3
    program->index_plight0      = glGetUniformLocation (glprogram, "ig_plight0");        // uniform variable == light #0 position                         == layout (location =  8) uniform vec4 ig_plight0
    program->index_plight1      = glGetUniformLocation (glprogram, "ig_plight1");        // uniform variable == light #1 position                         == layout (location =  9) uniform vec4 ig_plight1
    program->index_plight2      = glGetUniformLocation (glprogram, "ig_plight2");        // uniform variable == light #2 position                         == layout (location = 10) uniform vec4 ig_plight2
    program->index_plight3      = glGetUniformLocation (glprogram, "ig_plight3");        // uniform variable == light #3 position                         == layout (location = 11) uniform vec4 ig_plight3
    program->index_pcamera      = glGetUniformLocation (glprogram, "ig_pcamera");        // uniform variable == camera position == active camera          == layout (location = 12) uniform vec4 ig_pcamera
    program->index_tmap         = glGetUniformLocation (glprogram, "ig_tmap");           // texture-map #0 == texture-unit #0 : texture maps              == layout (location = 13) uniform sampler2DArray ig_tmap
    program->index_smap         = glGetUniformLocation (glprogram, "ig_smap");           // texture-map #1 == texture-unit #1 : surface maps              == layout (location = 14) uniform sampler2DArray ig_smap
    program->index_cmap         = glGetUniformLocation (glprogram, "ig_cmap");           // texture-map #2 == texture-unit #2 : cone maps                 == layout (location = 15) uniform sampler2DArray ig_cmap
    program->index_xmap         = glGetUniformLocation (glprogram, "ig_xmap");           // texture-map #3 == texture-unit #3 : x maps (unknown maps)     == layout (location = 16) uniform sampler2DArray ig_xmap
    glerror = glGetError();
//
// put OpenGL program object identifier
//
    program->glprogram = glprogram;                        // OpenGL "program object" identifier

QUESTION: This code captures the objid of the shaders in this program, the layout (location = n) of the vertex-attributes, and the layout (location = n) of the uniform variables/vectors/matrices in the program object structure so data can be written to the appropriate [16-byte] “location” in the default uniform block (and later into a UBO). But as with much of this topic, I’m not sure where this information should be captured, how much needs to be saved, and when these saved values might be needed. My only thought at the moment is the following: when the engine needs to write variables, vectors or matrices into a UBO, presumably the engine should grab the value from one of these program->index_xxxxxx variables in the program object structure and write data to that location with one of the glUniform*() functions… or multiply the location by 16 bytes to know where to write into a CPU memory buffer that will later be written to a UBO in the GPU. But… I have a difficult time keeping this straight.
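For what it’s worth, the only path I more-or-less understand right now is the default-uniform-block path, something like the sketch below (it uses the locations captured above; variable names like transform_matrix are placeholders, and the UBO path is a separate mechanism not shown here):

    glUseProgram (program->glprogram);                                               // program must be current for glUniform*()
    glUniformMatrix4fv (program->index_transform, 1, GL_FALSE, transform_matrix);    // write the mat4 into its location in the default uniform block
    glUniform4fv (program->index_clight0, 1, clight0);                               // write one vec4
    glUniform1i (program->index_tmap, 0);                                            // sampler uniforms receive texture-unit numbers, not texture names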

Perhaps the real question should be… when is any of this information needed? What needs to be set up before any OpenGL function is called?

Let me just babble a bit, make some guesses, and correct me when I’m wrong. Or just provide a nice clean statement of what needs to be done when.

***** create batch == create VAO, IBO, VBO *****
- create and bind VAO, IBO, VBO

  • specify vertex-attribute byte-offsets by calling glVertexAttribPointer() or glVertexAttribIPointer()
  • enable vertex-attributes by calling glEnableVertexAttribArray() for each vertex-attribute == location
  • capture and save in batch object structure the layout location of each vertex-attribute by calling glGetAttribLocation()
  • capture and save in batch object structure the layout location of each uniform variable/vector/matrix by calling glGetUniformLocation()

That’s what the code does, but thinking about this makes the last item look stupid because the uniform variables don’t really have much of anything directly to do with the batch. And besides, presumably the only way glGetUniformLocation() can return values at all is because some OpenGL program object was previously made active by calling glUseProgram(). But does the program object necessarily have anything to do with any given batch? Doesn’t seem so… off hand at least. So maybe storing the layout location of the uniform variables during batch creation is stupid.

***** write indices/elements into IBO *****

  • bind the IBO if not already active
  • write new or updated indices/elements into IBO by calling glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, … )
  • no need to bind associated VAO ???
  • no need to bind associated VBO ???

***** write vertices into VBO *****

  • bind the VBO if not already active
  • write new or updated vertices into VBO by calling glBufferSubData(GL_ARRAY_BUFFER, … )
  • no need to bind associated VAO ???
  • no need to bind associated IBO ???

***** draw batch of shape objects *****

  • bind the VAO created to serve this batch
  • bind the IBO created to serve this batch — is this necessary ???
  • bind the VBO created to serve this batch — is this necessary ???
  • update contents of array-textures on texture-units 0, 1, 2, 3… 29, 30, 31 as needed
  • make active the OpenGL program object appropriate to draw this batch of objects
  • set any uniform values this program accesses that may have changed since the last draw
  • call glDrawElements() to draw this batch of shape objects

What else?

What else happens that has anything to do with vertex-attributes?

What else happens that has anything to do with any uniform value or a UBO?

Frankly, I’m not even sure the above is the correct way to ask these questions. Hopefully the above at least says enough to spur someone to reply with a simple and concise statement of what needs to be done and when (before something else is done). It really can’t be as complicated as it seems.

QUESTION: Maybe VAO and program objects operate utterly and totally independently. Is that correct? Does anything in OpenGL keep VAOs, vertices and vertex-attributes consistent with the active program? Can a VAO have a totally different number (and types, and significance) of vertex-attributes than the active program expects?

Anyway, if anyone really understands all these [somewhat] related topics and how they fit together… please try to explain so a dummy like me can comprehend.

I understand all of these topics. But your explanation of what you’re trying to do demonstrates a very great deal of confusion on these matters. This confusion is coupled with the difficulty of reading your post, since you’ve elected to use a different font from the usual one for this forum. But in any case, what matters is that you have enough misunderstanding about these topics that the only way I can correct you is by just explaining everything.

And I’ve already done that once.

I understand that these are complex topics. But your post would require multiple tutorials worth of correction.

However, I can offer some advice:

Standard practice for my shaders is to fully specify layout (location == index) for every vertex-attribute and every uniform variable/vector/matrix

If that’s the case, then there is absolutely no reason for you to be querying the locations in your OpenGL code. There’s no point in calling glGetAttrib/UniformLocation; they should be hard-coded into your system.

Why? Well, you’re already hard-coding the names of those variables, right? So the writer of the shader knows that they have to use “ig_position” as the name of the position input variable, or “ig_transform” as the name of a transform matrix uniform. Otherwise, their shaders won’t work with your engine.

So, since the writer of the shader has to conform to some standard to be usable by your engine, why not make the standard based on location indices instead of names? Instead of making the shader writer use “ig_position”, require instead that they use layout(location = 0). That way, they can name their variable whatever they like, and you don’t have to query the locations in OpenGL code. You know that location 0 means position; you don’t have to call glGetAttribLocation on some name to fetch it.

There’s still a standard that they must write to. But the standard is not defined by names, but by integers.
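As a sketch of that contract (the names here are placeholders, not something your engine has to use):

// GLSL side: the shader author can call the variable anything, but must use the agreed location
//     layout(location = 0) in vec4 whatever_name_they_like;   // always the vertex position
//
// engine side: no queries at all, just agreed-upon constants
enum { ATTRIB_POSITION = 0, ATTRIB_ZENITH = 1, ATTRIB_NORTH = 2, ATTRIB_COLOR = 3 };
glEnableVertexAttribArray(ATTRIB_POSITION);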

  • create and bind VAO, IBO, VBO
  • specify vertex-attribute byte-offsets by calling glVertexAttribPointer() or glVertexAttribIPointer()

Above, you said that “standard practice” was to specify uniform locations with “layout(location = X)”. Well, specifying uniform locations in the shader is actually kind of new; it’s from OpenGL 4.3/ARB_explicit_uniform_location.

And OpenGL 4.3 also includes the separate attribute format syntax. So if you can access one, then you probably can access the other.

At which point, you no longer have to use any of the glVertexAttrib*Pointer functions. You can instead use the much better glVertexAttrib*Format/glBindVertexBuffer APIs. You will find that these make a lot more sense than the *Pointer functions.

At which point, there is no need to create the index or vertex buffers anywhere in relation to the vertex array object. You can set up the vertex format information as part of your batch (the glVertexAttrib*Format/glVertexAttribBinding calls), then hook it into the specific buffers you want to use at render time (glBindVertexBuffer/glBindBuffer(GL_ELEMENT_ARRAY_BUFFER)).

Indeed, you don’t even need a VAO to create the index buffer. Buffer objects are not linked to the target you bind them to. So you can create a buffer and bind it to any target you like just to put data into it. You can later bind it to GL_ELEMENT_ARRAY_BUFFER when it comes time to render.

You do not need to create buffers anywhere near your VAOs.
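In rough outline (buffer names and offsets below are placeholders), batch creation would record only the vertex format, and rendering would attach whatever buffers you want:

// at batch creation: format + attribute-to-binding mapping only; no buffers involved
glBindVertexArray(batch_vao);
glVertexAttribFormat(0, 4, GL_FLOAT, GL_FALSE, position_offset);  // what attribute 0 looks like
glVertexAttribBinding(0, 0);                                      // attribute 0 reads from binding 0
glEnableVertexAttribArray(0);
// ...repeat for the other attributes...

// at render time: hook the actual buffers into the VAO
glBindVertexArray(batch_vao);
glBindVertexBuffer(0, some_vbo, 0, vertex_stride);                // binding 0 = this buffer, base offset, stride
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, batch_ibo);                 // the index buffer attachment is VAO state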

Oh, and one more thing:

And besides, presumably the only way glGetUniformLocation() can return values at all is because some OpenGL program object was previously made active by calling glUseProgram().

glGetUniformLocation takes a program object as a parameter. So no, it has nothing to do with the current program.

I very much appreciate your help. To even ask the right questions is difficult. I read the page you linked to twice. It helped me understand certain issues better, but leaves me not understanding much, too. Part of the problem… with pretty much everything I’ve ever read about OpenGL… is that the text says “bind this” and “bind that”… but most of the time it doesn’t say what things are being bound to, or what the consequences of binding here or there (or when) are. This seems to be a firm habit of everyone who knows OpenGL well, but it probably drives everyone trying to learn OpenGL crazy (and keeps their brains fuzzy, and prevents them from understanding what’s going on).

To explain a little: I didn’t always have the layout location syntax in my shaders, but when I learned about that option I liked it and adopted the practice. Some of the code in the engine pre-dates that change, but I left it in for debugging purposes (to make sure everything was lining up the way I expected), intending to wrap it in #ifdef DEBUG / #endif someday.

In short, yes, I do prefer to completely isolate the engine from the shaders to the extent possible. Basically, just specify which vertex attributes (of which datatypes) appear in each of the 16-byte layout location slots in the vertex shader… and basically the same for uniforms. Ignoring attribute [and even uniform variable] names entirely is fine with me.

It seemed like the second half of your webpage (on separate attribute format) was more-or-less discussing a way to do this. Unfortunately, even after reading several times, I didn’t really understand what is going on. So, just when I’m almost getting to understand “the old bad way” that I have now… it seems that the “new good way” that seemingly does what I want is more-or-less beyond my comprehension (because I don’t know what’s going on behind the scenes, or why). Maybe I can fiddle around and make the code work (like I did before), but I hate being confused.

As you surely know by now, I’m doing everything I can think of to put “compatible objects” into the same batch so I can render them all by calling glDrawElements() once (where “compatible objects” are objects that can be drawn by the same set of active textures and the same shader programs).

To make this approach possible and practical and highly effective, the following was done:

#1: The vertex-attributes include two u32 integers that contain various bits and fields:

  • 24-bits == tmatid == objid == index into array of transformation matrices
  • 08-bits == saybits == bits that enable/disable various features
  • 08-bits == tmapid == texturemap (index into array texture)
  • 08-bits == smapid == surfacemap (index into array texture)
  • 08-bits == cmapid == conemap (index into array texture)
  • 08-bits == xmapid == xmap (index into array texture)

The saybits mostly determine how lighting is performed:

  • one bit says “make vertex color.rgba attribute contribute to pixel color”
  • one bit says “make texturemap color contribute to pixel color”
  • one bit says “make surfacemap contribute to pixel color” (normal mapping)
  • one bit says “make conemap contribute to pixel color” (parallax mapping)
  • one bit says “make xmap contribute to pixel color” (specular mapping)
  • and so forth (and some bits not yet defined)

One purpose of this uber-shader approach is the same… to assure the same shader can render most objects. But also, since all these options are available on a triangle by triangle basis (from the provoking vertex of each triangle), different parts of objects can apply different textures, different surfacemaps, different conemaps, different specularmaps and so forth. In the best (rarely practical) case, every part of every object can be rendered by a single call to one glDrawElements() function.
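For concreteness, the packing looks roughly like this (the exact byte order within each u32 is approximate here):

    // pack the two u32 integers that ride along in each vertex
    u32 mixmatsay_x = ((u32)tmapid <<  0) | ((u32)smapid <<  8) |
                      ((u32)cmapid << 16) | ((u32)xmapid << 24);              // four 8-bit indices into the array textures
    u32 mixmatsay_y = (tmatid & 0x00FFFFFF) | ((u32)saybits << 24);           // 24-bit transformation-matrix index + 8 enable/disable bits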

As a matter of policy (and habit), I “design ahead”. In this case this means that I’m perfectly happy if only the high-end GPUs of 2018~2020 can run this engine. However, that has to include both AMD and nvidia GPUs… which makes me wonder whether I can adopt bindless textures or not (which seem vastly better than array textures for my purposes).

I haven’t made the following change yet, but soon I intend to replace all the VBO in the engine (which is now one VBO per batch) with one huge, permanent, persistent VBO. Obviously I still need to have separate batches… even in the best case… because objects composed of points must be rendered by a separate draw call and objects composed of lines must be rendered by a separate draw call. So at minimum 3 batches (and thus VAO and IBO) are necessary (unless no point and/or no line primitives exist at all). But now I see no reason whatsoever that all vertices cannot reside in a single VBO. Which will be one less state to change, and apparently other gains are made by making that VBO “persistent” via glBufferStorage(). Of course I don’t know what I’m talking about here… but that’s what I think I read somewhere, so I intend to give this a try.
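My [possibly shaky] understanding is that the persistent approach looks something like the sketch below (the exact flag choices, and whether to use coherent mapping or explicit flushes, are things I still need to verify):

    // allocate one huge immutable buffer and keep it mapped for the life of the program
    glBindBuffer (GL_ARRAY_BUFFER, huge_vboid);
    glBufferStorage (GL_ARRAY_BUFFER, huge_vbo_bytes, NULL,
                     GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
    void* vbo_memory = glMapBufferRange (GL_ARRAY_BUFFER, 0, huge_vbo_bytes,
                     GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
    // ...from then on, write vertices directly into vbo_memory instead of calling glBufferSubData()...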

You say “the engine doesn’t need to create buffers anywhere near the VAO”. That’s one thing that confuses me a bit. Let’s say the engine creates and binds that one huge VBO to GL_ARRAY_BUFFER binding point (if that even exists outside of VAO circa OpenGL v4.6 core). Then later, each time the engine creates a new batch with its new VAO and IBO and sets vertex-attributes… does the engine need to bind 0 to GL_ARRAY_BUFFER then bind the one huge VBO to GL_ARRAY_BUFFER again — to make the VAO record the appropriate VBO? Or does the fact the VBO was already bound to GL_ARRAY_BUFFER do the trick (assuming that is even possible without a bound VAO in OpenGL v4.6 core)?

As an aside, here are a few comments to help you understand what I’m trying to do… at least vaguely. The engine tries to provide advanced, sophisticated capabilities to writers of simulations and/or realistic games (with an initial emphasis on games that take place in outer space). For example, by default the engine automatically performs collision detection and collision response (kinetic physics) on all objects (unless purposely disabled), automatically applies gravitational forces to all objects based upon the masses and distances to all other objects (that are close enough and massive enough to have measurable influence), supports various kinds of force and torque generators (thrusters and such), and so forth.

The hope is, the application writer can create some spiffy simulations or games without going through all the hell the engine had to go through.

Oh, right. I forgot to mention that one fundamental focus of the engine is “procedurally generated content”. While ultimately we hope this includes “everything”, at the beginning the most significant application of this is the creation and assembly of physical objects from simpler objects from simpler objects and ultimately from the set of fundamental shapes we provide (which can be arbitrarily and substantially scaled, sheared, twisted, tweaked, modified and configured during and after creation). Also the procedural generation of texturemaps or surfacemaps which can then be added to the set of available textures and then applied to any surface… or else real procedural coloring or tweaking of any existing surface in object-local space or world-coordinate space. Lots more procedural generation will come later.

I have a billion-plus star catalog that I compiled for various purposes, which includes spectral types (colors), parallax (distances), proper-motions and more for every star. I’m in the process of figuring out how to automatically render totally awesome star backgrounds based on this catalog, from any point in the galaxy, given the catalog includes [often estimated and approximate but realistic] distances (and therefore 3D locations in the galaxy). The faintest stars are as faint as huge telescopes can see, so even when the cameras are zoomed to 100x or more… the star fields will be precise (and quite useful as a tool for locating objects through telescopes at amateur and professional observatories). This is actually a very difficult problem that I still haven’t completely solved… and I’ll probably hassle you and others sometime soon to help me figure out some of my remaining problems with this. Hint: I organized the database into regions, where each region is precisely the area of the sky covered by one pixel in a cube map with 1024x1024 faces. One problem is… for a random camera orientation, which regions need to be rendered? I’m okay at some math, but problems like this give me nightmares.

Bottom line: Gads, I wish I could just deal with the engine-specific features, which are plenty numerous and difficult enough to drive me nuts and give me nightmares.

Thanks for your help. Any additional confusions you can clear up will be appreciated.

PS: Your explanation of how the VAO works (in conventional application) was the clearest I’ve seen and gets me close to understanding that much. Too bad the second half of your writeup about glVertexAttrib*Format() and glVertexAttribBinding() convinces me I really should adopt the other unconventional approach, and puts me back close to zero again (in the new paradigm).

PS: My two computers are Linux Mint v18.1 with Ryzen and Threadripper CPUs and GTX 1080TI GPUs. So my hardware is probably up to snuff, though sometimes I don’t update my drivers beyond what Linux Mint is willing to install for me with its “driver manager” application.

The Wiki article on OpenGL Objects tries to explain the concept in general terms. I try to explain it with a source code analogy here.

All the separate attribute format APIs do is separate what an attribute’s data looks like from where it comes from in memory.

Think of the stuff glVertexAttribFormat sets as being like a type in C/C++: float*, int*, vec3*, uint16_vec3*, etc. It says that attribute X in the shader will be filled in by data of type Y. It’s like a typedef.

glBindVertexBuffer is the equivalent of defining a void*: a pointer to a location in memory. There are a fixed number of such “pointers” that a VAO can store.

glVertexAttribBinding says “the type defined for attribute X will get its data from pointer Z”.

So consider the following:


typedef float Attrib0; //Equivalent to glVertexAttribFormat(0, 1, GL_FLOAT, GL_FALSE, 0);
void *ptr_array[16]; //Total number of such pointers in a VAO.
ptr_array[5] = some_pointer; //Equivalent to glBindVertexBuffer(5, ...);
static_cast<Attrib0*>(ptr_array[5]); //Equivalent to glVertexAttribBinding(0, 5);

That’s how separate attribute format works. glVertexAttribPointer set the typedef and did the pointer cast. Separate attribute formats make them separate steps.

They’re all still VAO data. But it allows you to set the buffers without also setting the format. That is, you can change ptr_array[5] without redefining Attrib0 and so forth (the analogy breaks down, since the cast is automatically updated, but you get the idea).
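In actual GL calls, one old-style call splits into the new calls roughly like this (buffer, stride and offset are placeholders; with the binding’s base offset at zero, the relative offset here is the same byte offset as before):

// old way: format + buffer + offset all at once (buffer taken from whatever is bound to GL_ARRAY_BUFFER)
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, stride, (void*)offset);

// new way: the same information in separate pieces
glVertexAttribFormat(0, 4, GL_FLOAT, GL_FALSE, offset);  // the "typedef"
glBindVertexBuffer(0, buffer, 0, stride);                // the "pointer"
glVertexAttribBinding(0, 0);                             // the "cast": attribute 0 reads from binding 0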

I can’t explain everything about everything, but I’ll try to muddle through this paragraph (the one about binding that one huge VBO to GL_ARRAY_BUFFER):

The last time functionality was removed from OpenGL was in 3.1. And that will almost certainly be the last time, so there’s no need to worry about whether something “even exists”. If it was part of the core profile post-GL 3.1, then it will always be so.

That being said, if you are strictly writing your code against OpenGL 4.6, you have access to Direct State Access functions. So there’s no need to bind the buffer to any target in order to create it and fill it with data.


GLuint buffer;
glCreateBuffers(1, &buffer);
glNamedBufferStorage(buffer, ...);

But nevermind that now:

There is no need to rebind an object. OpenGL will not unbind objects for you (unless you delete them).

That being said, it’s generally not wise to just assume that an object will be forever bound to a binding point. Oh sure, your code will work… right up until you accidentally change that binding point. Then you’re screwed.

This is why modern programming paradigms avoid mutable, global state. They prefer passing parameters or having object members.

That being said again, it’s still best to use DSA to handle this:


glVertexArrayVertexBuffer(vao, buffer, offset);

That way, you never ever have to use GL_ARRAY_BUFFER ever again.

I’m glad I waited overnight to reply to your message. I tossed and turned for hours because I was dreaming about these topics as my brain tried to make sense of everything. Sure enough, that seemed to work. At least now I more-or-less understand the relationship between the new set of functions and the old set of functions (which I was just finally coming to understand).

Part of the problem was that bindingindex business. That made no sense to me. Reading the OpenGL SuperBible did not make this any clearer… it pretty much just said the word “binding” and moved on from there. This is one of my gripes about how authors talk about OpenGL. They’ve been trained to believe “bind” is a magic word, and if they say “bind” then all novices will automatically understand what that means, what the object or information is being bound to, and the significance of that. And if they don’t understand, then this is obviously a mental defect in the reader… because this is how all OpenGL authors talk! So shape up, all you mentally defective OpenGL non-experts (especially me)!!!

I don’t mean to be annoying, but that’s how it feels (and I do believe that’s what is going on).

In this case, after my sleeping brain grappled with the whole “new way” versus “old way” issue for a few hours… it finally said “hey, this binding is NOTHING”… or more exactly, “this binding has no specific meaning… the binding points are just small numbers like the 16-byte vertex attribute locations, except unlike locations they literally have no meaning whatsoever”. Essentially they are just another way to identify a specific VBO buffer. So if we designate the VBO buffer object with an OpenGL identifier/name of 3 as “binding = 0” in a VAO… then from then on we can associate any attribute with “binding #0” to say “this is the VBO to get this attribute from”. In the “old way” the VAO held the OpenGL object identifier for the VBO each attribute was fetched from. Now the VAO holds the “binding number” of the VBO buffer each attribute will be fetched from.

I have no idea why this extra level of indirection is important or helpful, but… the same result seems to be accomplished.

I wonder whether all vertex attributes default to the VBO associated with binding == 0. If so, then code only needs to specify the binding of one VBO is “binding == 0” and then automatically all attributes will be fetched from that VBO buffer. That would be nice. Unfortunately, the example code I’ve seen contains a whole bunch of function calls to associate all the attributes to binding == 0.

Oops! Now I realize I’m confused again! By the very last line of your message in fact. What does that do? I mean… what binding does that VBO buffer have when you call that glVertexArrayVertexBuffer(vao, vbo, offset) function? What attributes will be fetched from that VBO buffer?

Hmmm. Let me think. Oh, wait! What you wrote isn’t the complete set of arguments for glVertexArrayVertexBuffer(). Whew! Hahaha. More like…

glVertexArrayVertexBuffer (vao, bindingindex, vbo, offset, stride);

So this function does the following:
- vao ::: specifies the VAO this VBO is being put into as a [potential] source of vertex attributes.
- bindingindex ::: specifies binding index of this VBO, an arbitrary meaningless tiny integer to identify this VBO (instead of VBO name).
- vbo ::: specifies a specific VBO buffer object via the OpenGL object name/identifier.
- offset ::: offset into VBO where the first values for this attribute are stored.
- stride ::: bytes between successive values of this attribute.

The most important point is… this function does specify the bindingindex of this VBO, which eliminates my confusion. Whew!

The offset argument is interesting. I can see how that would be useful to SoA lovers (which definitely does not include me) who want to jam all the attributes into one VBO… though I am immediately confused as to whether that byte offset is treated as the start of the VBO (so the value of the first index/element would be zero to refer to that attribute for the first vertex), or whether the index/element would still need to contain the location of that attribute in the entire VBO. Since my engine will have one humongous VBO that holds all vertices for all objects, plus several IBO == one for each batch (all objects that can be drawn with a single draw call)… I’m not sure whether that offset argument can make my life easier or not. In my current implementation the IBO refers to vertices in the VBO starting at the very first byte in the VBO. Off hand I don’t see how this offset can help me. The structure for every object contains two numbers that are the offsets in vertices and bytes from the first vertex and byte in the VBO to the first vertex of that object. And so, it is trivial to compute the values to put in the IBO for any object, no matter where that object is located in the VBO.

The stride argument is easy. I guess the only difference is… zero is not valid. No problem, since sizeof(ig_vertex32) is easy code to write.

And so… hopefully I do have this straight [enough] now. I’ll find out when I replace my current code with “the new better way”, then click “run”.
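For my own reference, here is the shape of the DSA version I intend to write (names like huge_vboid are placeholders until I actually do it):

    // at batch creation: describe the vertex format and which binding each attribute reads from
    glCreateVertexArrays (1, &vaoid);
    glVertexArrayAttribFormat  (vaoid, 0, 4, GL_FLOAT, GL_FALSE, 0x0000);       // attribute 0 == position.xyz + east.x
    glVertexArrayAttribBinding (vaoid, 0, 0);                                    // attribute 0 reads from binding 0
    glEnableVertexArrayAttrib  (vaoid, 0);
    // ...repeat for attributes 1 through 5...
    glVertexArrayVertexBuffer  (vaoid, 0, huge_vboid, 0, sizeof(ig_vertex32));   // binding 0 == the one huge VBO
    glVertexArrayElementBuffer (vaoid, iboid);                                   // this batch's IBO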

Essentially they are just another way to identify a specific VBO buffer.

This is what a VAO looks like:


struct VertexFormat
{
  bool enabled;
  AttributeType attrib_type; //Integer, float, or double.
  bool normalized;
  DataType type; //The type of the data in memory.
  uint num_components;
  uint offset;
};

struct BufferBinding
{
  uint buffer_object;
  intptr_t offset;
  sizei stride;
  uint divisor;
};

struct VertexArrayObject
{
  BufferBinding bindings[16];
  VertexFormat attributes[16];
  uint bindingIndex[16];
};

For each array element in attributes, the corresponding bindingIndex says which bindings element a particular attribute gets its data from. So for attribute[2], the BufferBinding it uses to get its data is bindings[bindingIndex[2]].

And as you can see from the data structure, BufferBinding contains more than just a buffer object. It contains the byte offset to start reading that buffer object from, the stride for indexing the array, and the divisor if you want to do instanced rendering.

So no, a “bindingIndex” is not “just another way to identify a specific VBO buffer”.

You can put the same buffer into two separate binding indices, with different offsets. That represents reading from different parts of the buffer. Maybe your data is arranged as an array of positions followed by an array of colors. That is, you’re not doing interleaving. To do that, you use two separate bindings, since VertexFormat::offset can’t be big enough to skip the entire array of positions to get to the correct color.
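For example, a sketch of that non-interleaved arrangement (the buffer name and byte counts are made up):

// one buffer laid out as [ all positions ][ all colors ]
glBindVertexBuffer(0, buffer, 0,              sizeof(float) * 3);  // binding 0 starts at the positions
glBindVertexBuffer(1, buffer, position_bytes, sizeof(float) * 4);  // binding 1 starts where the colors begin
glVertexAttribBinding(0, 0);                                       // attribute 0 (position) reads binding 0
glVertexAttribBinding(1, 1);                                       // attribute 1 (color) reads binding 1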

I wonder whether all vertex attributes default to the VBO associated with binding == 0. If so, then code only needs to specify the binding of one VBO is “binding == 0” and then automatically all attributes will be fetched from that VBO buffer. That would be nice. Unfortunately, the example code I’ve seen contains a whole bunch of function calls to associate all the attributes to binding == 0.

First question: who cares? It’s just a bunch of glVertexAttribBinding(X, 0) calls. It’s not important.

Second, no, the default bindingIndex value is the attribute index it corresponds to. So by default, the binding for attribute 0 is 0, the binding for attribute 5 is 5, etc.

But again, who cares? Just set the state you use.

Oops! Now I realize I’m confused again! By the very last line of your message in fact. What does that do? I mean… what binding does that VBO buffer have when you call that glVertexArrayVertexBuffer(vao, vbo, offset) function? What attributes will be fetched from that VBO buffer?

Sorry about that. My only real point was that there was a DSA alternative to glBindVertexBuffer.

Off hand I don’t see how this offset can help me.

It doesn’t have to. Not everything in OpenGL is to help you personally. Other people need to be able to adjust the starting offset, since their buffers may contain different kinds of data of different formats in different places.

If you don’t need it, set it to zero.