{{pipeline float}}
 
'''Vertex Specification''' is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.
 
== Theory ==

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

=== Vertex Stream ===

In order to render at all, you must be using a shader program. This program has a list of expected [[Vertex Attribute]]s. This set of attributes determines what attribute values you must send in order to properly render with this shader.

=== Primitives ===
 
{{main|Primitive}}
  
 
The above stream is not enough to actually render anything; you must also tell OpenGL how to interpret the stream in order to get proper rendering. This means telling OpenGL what kind of primitive to interpret the stream as.
 
 
The main article on this subject has the details.
 
  
Now that we understand the theory, let's look at how it is implemented in OpenGL. Vertex data is provided to OpenGL as arrays. Thus, OpenGL needs two things: the arrays themselves and a description of how to interpret the bytes of those arrays.
  
Making a vertex stream in OpenGL requires using two kinds of objects: [[Vertex Array Objects]] (VAO) and [[Vertex Buffer Objects]] (VBO). VBOs store the actual vertex and index arrays, while VAOs store the settings for interpreting the data in those arrays.

== Vertex Array Object ==
{{infobox feature
| name = Vertex Array Object
| core = 3.0
| core_extension = [http://www.opengl.org/registry/specs/ARB/vertex_array_object.txt ARB_vertex_array_object]
}}

A '''Vertex Array Object''' (VAO) is an [[OpenGL Object]] that encapsulates all of the state needed to specify vertex data (with one minor exception noted below). They define the format of the vertex data as well as the sources for the vertex arrays. Note that VAOs do not ''contain'' the arrays themselves; the arrays are stored in [[Buffer Object]]s ([[Vertex Buffer Object|see below]]). The VAOs simply reference already existing buffer objects.
  
As [[OpenGL Object]]s, VAOs have the usual creation, destruction, and binding functions: {{apifunc|glGenVertexArrays}}, {{apifunc|glDeleteVertexArrays}}, and {{apifunc|glBindVertexArray}}. The latter is different, in that there is no "target" parameter; there is only one target for VAOs, and {{apifunc|glBindVertexArray}} binds to that target.
  
{{note|VAOs cannot be shared between OpenGL contexts.}}
  
Vertex attributes are numbered from 0 to {{enum|GL_MAX_VERTEX_ATTRIBS}} - 1. Each attribute array can be enabled for array access or disabled. When an attribute array is disabled, any attempts by the vertex shader to read from that attribute will produce a constant value ([[#Non-array attribute values|see below]]) instead of a value pulled from an array.
  
A newly-created VAO has all of the arrays disabled. Arrays are enabled by binding the VAO in question and calling:
  
 void {{apifunc|glEnableVertexAttribArray}}(GLuint {{param|index}});
  
There is a similar {{apifunc|glDisableVertexAttribArray}} function to disable an enabled array.
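
For illustration, here is a minimal sketch (the attribute index 0 is an arbitrary example; it must match the attribute layout of the shader you intend to use) of creating a VAO, binding it, and enabling one attribute array:

<source lang="cpp">
GLuint vao = 0;
glGenVertexArrays(1, &vao);    // create a new VAO name
glBindVertexArray(vao);        // bind it; subsequent vertex array state is stored in this VAO

glEnableVertexAttribArray(0);  // enable array access for attribute index 0 (part of the VAO's state)
</source>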
  
Remember: except where explicitly stated otherwise, all of the state described below is part of the VAO's state, and is therefore captured by the VAO.
  
== Vertex Buffer Object ==
  
A '''Vertex Buffer Object''' (VBO) is a [[Buffer Object]] which is used as the source for vertex array data. It is no different from any other buffer object, and a buffer object used for [[Transform Feedback]] or [[Pixel Buffer Object|asynchronous pixel transfers]] can be used as source values for vertex arrays.
  
The format and source buffer for an attribute array can be set by doing the following. First, the buffer that the attribute comes from must be bound to {{enum|GL_ARRAY_BUFFER}}.
  
{{note|The {{enum|GL_ARRAY_BUFFER}} binding is '''''NOT''''' part of the VAO's state! I know that's confusing, but that's the way it is.}}
  
Once the buffer is bound, call one of these functions:
  
  void {{apifunc|glVertexAttribPointer}}( GLuint {{param|index}}, GLint {{param|size}}, GLenum {{param|type}},
    GLboolean {{param|normalized}}, GLsizei {{param|stride}}, const void *{{param|offset}});
  void {{apifunc|glVertexAttribIPointer}}( GLuint {{param|index}}, GLint {{param|size}}, GLenum {{param|type}},
    GLsizei {{param|stride}}, const void *{{param|offset}} );
  void {{apifunc|glVertexAttribLPointer}}( GLuint {{param|index}}, GLint {{param|size}}, GLenum {{param|type}},
    GLsizei {{param|stride}}, const void *{{param|offset}} );
  
All of these functions do more or less the same thing. The difference between them will be discussed later. Note that the last function is only available on GL 4.1 or if {{extref|vertex_attrib_64bit}} is available.
  
{{comp note|In a compatibility profile, you are required to use attribute index 0 for some attribute. If you do not, then rendering will not take place. Both the shader and the vertex array state must have an attribute index 0.}}
These functions say that the attribute index {{param|index}} will get its attribute data from whatever buffer object is ''currently bound'' to {{enum|GL_ARRAY_BUFFER}}. It is '''vital''' to understand that this association is made ''when this function is called.'' For example, let's say we do this:
  
<source lang="cpp">
glBindBuffer(GL_ARRAY_BUFFER, buf1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
</source>
  
The first line binds {{code|buf1}} to the {{enum|GL_ARRAY_BUFFER}} binding. The second line says that attribute index 0 gets its vertex array data from {{code|buf1}}, because that's the buffer that was bound to {{enum|GL_ARRAY_BUFFER}} when the {{apifunc|glVertexAttribPointer}} was called.
  
The third line binds the buffer object 0 to the {{enum|GL_ARRAY_BUFFER}} binding. What does this do to the association between attribute 0 and {{code|buf1}}?
  
'''''Nothing!''''' Changing the {{enum|GL_ARRAY_BUFFER}} binding changes nothing about vertex attribute 0. Only calls to {{apifunc|glVertexAttribPointer}} can do that.
  
Think of it like this. {{apifunc|glBindBuffer}} sets a global variable, then {{apifunc|glVertexAttribPointer}} reads that global variable and stores it in the VAO. Changing that global variable after it's been read doesn't affect the VAO. You can think of it that way because that's ''exactly'' how it works.
  
This is also why {{enum|GL_ARRAY_BUFFER}} is ''not'' VAO state; the actual association between an attribute index and a buffer is made by {{apifunc|glVertexAttribPointer}}.
  
Note that it is an error to call the {{apifunc|glVertexAttribPointer}} functions if 0 is currently bound to {{enum|GL_ARRAY_BUFFER}}.
  
== Vertex format ==
  
The {{apifunc|glVertexAttribPointer}} functions state where an attribute index gets its array data from. But they also define how OpenGL should interpret that data. Thus, these functions conceptually do two things: they set the buffer object information on where the data comes from, and they define the format of that data.
  
The format parameters describe how to interpret a single vertex of information from the array. [[Vertex Attribute]]s in the [[Vertex Shader]] can be declared as a floating-point GLSL type (such as {{code|float}} or {{code|vec4}}), an integral type (such as {{code|uint}} or {{code|ivec3}}), or a double-precision type (such as {{code|double}} or {{code|dvec4}}). Double-precision attributes are only available in GL 4.1/{{extref|vertex_attrib_64bit}}.
  
The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which {{apifunc|glVertexAttribPointer}} function you use. For floating-point attributes, you must use {{apifunc|glVertexAttribPointer}}. For integer (both signed and unsigned), you must use {{apifunc|glVertexAttribIPointer}}. And for double-precision attributes, where available, you must use {{apifunc|glVertexAttribLPointer}}.
  
Each attribute index represents a vector of some type, from 1 to 4 components in length. The {{param|size}} parameter of the {{apifunc|glVertexAttribPointer}} functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4. Note that {{param|size}} does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has ''more'' components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the XYZW components. Note that for double-precision inputs (GL 4.1 or {{extref|vertex_attrib_64bit}}), having more components than provided leaves the extra components with undefined values.
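
As a short illustration (the attribute locations and GLSL declarations are hypothetical, and an appropriate buffer is assumed to be bound to {{enum|GL_ARRAY_BUFFER}} with the arrays enabled), a floating-point input can be fed from a smaller array, while an integral input must use the "I" function:

<source lang="cpp">
// Shader inputs (for example): layout(location = 0) in vec4  position;   // floating-point
//                              layout(location = 1) in ivec2 cellCoord;  // integral

// 3 floats per vertex; the shader's vec4 receives W = 1.0 for the missing component.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

// Integral attribute: must use the "I" variant with an integer type.
glVertexAttribIPointer(1, 2, GL_INT, 0, 0);
</source>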
  
=== Component type ===
  
The type of the vector component in the buffer object is given by the {{param|type}} and {{param|normalized}} parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different {{apifunc|glVertexAttribPointer}} functions take different {{param|type}}s. Here is a list of the types and their meanings for each function:
  
{{apifunc|glVertexAttribPointer}}:
* Floating-point types. {{param|normalized}} must be {{enum|GL_FALSE}}.
** {{enum|GL_HALF_FLOAT}}: A [[Small_Float_Formats#Half_floats|16-bit half-precision floating-point value]]. Equivalent to {{code|GLhalf}}.
** {{enum|GL_FLOAT}}: A 32-bit single-precision floating-point value. Equivalent to {{code|GLfloat}}.
** {{enum|GL_DOUBLE}}: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but it is almost certainly a performance trap. Equivalent to {{code|GLdouble}}.
** {{enum|GL_FIXED}}: A 16.16-bit fixed-point two's complement value. Equivalent to {{code|GLfixed}}.
* Integer types; these are converted to floats automatically and ''with zero performance cost''. If {{param|normalized}} is {{enum|GL_TRUE}}, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If {{param|normalized}} is {{enum|GL_FALSE}}, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f).
** {{enum|GL_BYTE}}: A signed 8-bit two's complement value. Equivalent to {{code|GLbyte}}.
** {{enum|GL_UNSIGNED_BYTE}}: An unsigned 8-bit value. Equivalent to {{code|GLubyte}}.
** {{enum|GL_SHORT}}: A signed 16-bit two's complement value. Equivalent to {{code|GLshort}}.
** {{enum|GL_UNSIGNED_SHORT}}: An unsigned 16-bit value. Equivalent to {{code|GLushort}}.
** {{enum|GL_INT}}: A signed 32-bit two's complement value. Equivalent to {{code|GLint}}.
** {{enum|GL_UNSIGNED_INT}}: An unsigned 32-bit value. Equivalent to {{code|GLuint}}.
** {{enum|GL_INT_2_10_10_10_REV}}: Four values packed into a single 32-bit unsigned integer. The packed values themselves are ''signed'' two's complement integers, even though the overall bitfield is not. The bit depths of the packed fields are 2, 10, 10, and 10, in reverse order: the least-significant 10 bits are the ''first'' component, the next 10 bits are the second component, and so on. If you use this, {{param|size}} must be 4 (or [[#D3D compatibility|{{enum|GL_BGRA}}]], as shown below).
** {{enum|GL_UNSIGNED_INT_2_10_10_10_REV}}: Four values packed into a single 32-bit unsigned integer. The packed values are unsigned. The bit depths of the packed fields are 2, 10, 10, and 10, in reverse order: the least-significant 10 bits are the ''first'' component, the next 10 bits are the second component, and so on. If you use this, {{param|size}} must be 4 (or [[#D3D compatibility|{{enum|GL_BGRA}}]], as shown below).
  
{{apifunc|glVertexAttribIPointer}}:
* {{enum|GL_BYTE}}
* {{enum|GL_UNSIGNED_BYTE}}
* {{enum|GL_SHORT}}
* {{enum|GL_UNSIGNED_SHORT}}
* {{enum|GL_INT}}
* {{enum|GL_UNSIGNED_INT}}
  
{{apifunc|glVertexAttribLPointer}}:
* {{enum|GL_DOUBLE}}
  
Here is a visual demonstration of the ordering of the {{enum|2_10_10_10_REV}} types:
  
 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
 |  W |              Z              |              Y              |              X            |
 -----------------------------------------------------------------------------------------------
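
As a rough sketch of how data in this layout might be produced on the CPU (the helper functions here are hypothetical, the inputs are assumed to already be clamped to [-1, 1], and rounding is ignored), three signed normalized components could be packed like this:

<source lang="cpp">
// Convert a float in [-1, 1] to a 10-bit signed-normalized field (two's complement, kept in the low 10 bits).
static GLuint PackSignedNormalized10(float v)
{
    GLint scaled = (GLint)(v * 511.0f);   // 511 is the largest positive 10-bit signed value
    return (GLuint)scaled & 0x3FF;
}

// X goes in bits 0-9, Y in bits 10-19, Z in bits 20-29; the 2-bit W field (bits 30-31) is left at 0.
GLuint PackSignedNormal(float x, float y, float z)
{
    return PackSignedNormalized10(x) | (PackSignedNormalized10(y) << 10) | (PackSignedNormalized10(z) << 20);
}
</source>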
  
=== D3D compatibility ===
  
When using {{apifunc|glVertexAttribPointer}}, and ''only'' this function (not the other forms), the {{param|size}} field can be a number 1-4, but it can also be {{enum|GL_BGRA}}.
  
This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.
  
This special mode is intended specifically for compatibility with a certain Direct3D format. Because of that, it can ''only'' be used with {{enum|GL_UNSIGNED_BYTE}}, {{enum|GL_INT_2_10_10_10_REV}} and {{enum|GL_UNSIGNED_INT_2_10_10_10_REV}}. Also because of that, {{param|normalized}} ''must'' be {{enum|GL_TRUE}}; you cannot pass non-normalized values.
  
{{note|This mode should only be used if you have data that is formatted in this D3D style and you need to use it in your GL application. Don't bother otherwise.}}
  
Here is a visual description:
  
 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
 | W |             X              |              Y              |              Z            |
 -----------------------------------------------------------------------------------------------
  
Notice how X comes second and Z comes last. X is equivalent to R and Z is equivalent to B, so the components come in the reverse of BGRA order: ARGB.
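
For example (a sketch; attribute index 2 is arbitrary, and a buffer is assumed to be bound to {{enum|GL_ARRAY_BUFFER}}), a D3D-style BGRA color array would be specified like this:

<source lang="cpp">
// 4 unsigned bytes per vertex stored in BGRA order, normalized to [0, 1].
// GL_BGRA requires GL_UNSIGNED_BYTE or one of the 2_10_10_10_REV types, and normalized must be GL_TRUE.
glVertexAttribPointer(2, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE, 0, 0);
</source>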
  
== Vertex buffer offset and stride ==
  
The vertex format information above tells OpenGL how to interpret the data. The format says how big each vertex is in bytes and how to convert it into the values that the attribute in the vertex shader receives.
  
But OpenGL needs two more pieces of information before it can find the data. It needs a byte offset from the start of the buffer object to the first element in the array. So your arrays don't always have to start at the front of the buffer object. It also needs a stride, which represents how many bytes it is from the start of one element to the start of another.
  
The {{param|offset​}} defines the buffer object offset. Note that it is a parameter of type {{code|const void *}} rather than an integer of some kind. This is in part why it's called {{code|glVertexAttrib''Pointer''}}, due to old legacy stuff where this was actually a client pointer.
  
So you will need to cast the integer offset into a pointer. In C, this is done with a simple cast: {{code|(void*)(byteOffset)}}. In C++, this can be done as such: {{code|reinterpret_cast<void*>(byteOffset)}}.
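
A common convenience (a hypothetical helper, not part of OpenGL) is to wrap this cast in a small macro:

<source lang="cpp">
// Converts an integer byte offset into the pointer-typed argument expected by glVertexAttribPointer.
#define BUFFER_OFFSET(bytes) (reinterpret_cast<const void*>(bytes))

// Usage: glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
</source>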
  
The {{param|stride}} determines whether there are bytes between vertices. If it is set to 0, then OpenGL assumes that the vertex data is tightly packed, and it will compute the stride from the other parameters. So if you set {{param|size}} to 3 and {{param|type}} to {{enum|GL_FLOAT}}, OpenGL will compute a stride of 12 (4 bytes per float, and 3 floats per attribute).
  
=== Interleaved attributes ===
  
The main purpose of the {{param|stride}} attribute is to allow interleaving between different attributes. This is conceptually the difference between these two C++ definitions:
  
<source lang="cpp">
struct StructOfArrays
{
  GLfloat positions[VERTEX_COUNT * 3];
  GLfloat normals[VERTEX_COUNT * 3];
  GLubyte colors[VERTEX_COUNT * 4];
};

StructOfArrays structOfArrays;

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};

Vertex vertices[VERTEX_COUNT];
</source>
  
{{code|structOfArrays}} is a struct that contains several arrays of elements. Each array is tightly packed, but independent of one another. {{code|vertices}} is a ''single array'', where each element of the array is an independent vertex.
  
If we have a buffer object which has had {{code|vertices}} uploaded to it, such that {{code|baseOffset}} is the byte offset to the start of this data, we can use the {{param|stride}} parameter to allow OpenGL to access it:
  
<source lang="cpp">
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));
</source>
  
Note that each attribute uses the same stride: the size of the {{code|Vertex}} struct. C/C++ requires that the size of this struct be padded, where appropriate, such that you can get to the next element of an array by adding that size in bytes to a pointer (pointer arithmetic does this for you). Thus, the size of the {{code|Vertex}} structure is exactly the number of bytes from the start of one element to the start of the next, for ''each'' attribute.
  
The macro [http://en.cppreference.com/w/cpp/types/offsetof {{code|offsetof}}] computes the byte offset of the given field in the given struct. This is added to the {{code|baseOffset}}, so that each field points to the start of its own data relative to the beginning of the {{code|Vertex}} structure.
  
As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.
  
== Index buffers ==
  
Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a [[Buffer Object]], sometimes called an Index Buffer Object or Element Buffer Object.
  
This buffer object is associated with the {{enum|GL_ELEMENT_ARRAY_BUFFER}} binding target. This buffer object binding point is different from {{enum|GL_ARRAY_BUFFER}}; it is ''stored within the VAO''. This binding point is part of the VAO's state, and if no VAO is bound, then you cannot bind a buffer object to this binding target.
  
When a buffer is bound to {{enum|GL_ELEMENT_ARRAY_BUFFER}}, all [[Vertex Rendering|rendering commands]] of the form {{code|gl*Draw*Elements*}} will use the element buffer for indexed rendering. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.
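
As a sketch (the buffer name, index count, and index data are hypothetical; a VAO must already be bound so that the binding is recorded in it), creating and using an index buffer looks like this:

<source lang="cpp">
// With a VAO bound, this binding becomes part of the VAO's state.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort), indexData, GL_STATIC_DRAW);

// Later, when rendering: read indexCount unsigned shorts, starting at byte offset 0 of the index buffer.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
</source>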
  
== Instanced arrays ==
{{infobox feature
| name = Instanced arrays
| core = 3.3
| arb_extension = {{extref|instanced_arrays}}
}}

Normally, vertex attribute arrays are indexed based on the index buffer, or when doing array rendering, once per vertex from the start point to the end. However, when doing [[Vertex_Rendering#Instancing|instanced rendering]], it is often useful to have an alternative means of getting per-instance data than accessing it directly in the shader via a [[Uniform Buffer Object]], a [[Buffer Texture]], or some other means.

It is possible to have one or more attribute arrays indexed, not by the index buffer or direct array access, but by the ''instance count''. This is done via this function:

<br style="clear: both" />
 void {{apifunc|glVertexAttribDivisor}}(GLuint {{param|index}}, GLuint {{param|divisor}});

The {{param|index}} is the attribute index to set. If {{param|divisor}} is zero, then the attribute acts like normal, being indexed by the array or index buffer. If {{param|divisor}} is non-zero, then the current instance (as if from {{code|gl_InstanceID}}) is divided by this divisor, and the result of that is used to access the attribute array.
  
This is generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. Virtually every OpenGL implementation only offers 16 4-vector attributes, some of which will be the actual per-vertex data. So that leaves less for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.
  
However, that should be plenty for a quaternion orientation and a position, for a simple transformation. That would even leave one float (the position only needs to be 3D) to provide a fragment shader an index to access an [[Array Texture]].
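
Here is a rough sketch (attribute index 2, the buffer name, and the counts are hypothetical) of feeding a per-instance offset to the vertex shader through an instanced array:

<source lang="cpp">
// instanceBuffer holds one tightly packed vec3 offset per instance.
glBindBuffer(GL_ARRAY_BUFFER, instanceBuffer);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(2, 1);   // advance this attribute once per instance instead of once per vertex

// Draw 1000 instances; attribute 2 steps through instanceBuffer once per instance.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, 1000);
</source>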
  
== Separate attribute format ==
{{infobox feature
| name = Separate attribute format
| core = 4.3
| core_extension = {{extref|vertex_attrib_binding}}
}}
  
{{apifunc|glVertexAttribPointer}} and its variations are nice, but they unify two separate concepts into one function (from a hardware perspective): the vertex format for an attribute array, and the source data for that array. These concepts can be separated, allowing the user to separately specify the format of a vertex attribute from the source buffer. This also makes it easy to change the buffer binding for multiple attributes, since different attributes can pull from the same buffer location.
  
This separation is achieved by splitting the state into two pieces: a number of vertex buffer binding points, and a number of vertex format records.
  
The buffer binding points aggregate the following data:
  
* The source buffer object.
* The base offset for all vertex attributes that pull data from this binding point.
* The stride for all vertex attributes that pull data from this binding point.
* The instance divisor, which is used for all vertex attributes that pull data from this binding point.
  
The vertex format consists of:

* The size, type and normalization of the vertex attribute data.
* The buffer binding point it is associated with.
* A byte offset from the base offset of its associated buffer binding point to where its vertex data starts.
  
The functions that set the buffer binding point data are:
  
 void {{apifunc|glBindVertexBuffer}}(GLuint {{param|bindingindex}}, GLuint {{param|buffer}}, GLintptr {{param|offset}}, GLsizei {{param|stride}});
 void {{apifunc|glVertexBindingDivisor}}(GLuint {{param|bindingindex}}, GLuint {{param|divisor}});
  
{{apifunc|glBindVertexBuffer}} is kind of like {{apifunc|glBindBufferRange}}, but it is specifically intended for vertex buffer objects. The {{param|bindingindex}} is, as the name suggests, ''not a vertex attribute''. It is a binding index, which can range from 0 to {{enum|GL_MAX_VERTEX_ATTRIB_BINDINGS}} - 1. This will almost certainly be 16.
  
{{param|buffer}} is the buffer object that is being bound to this binding index. Note that there is no need to bind the buffer to {{enum|GL_ARRAY_BUFFER}}; the function takes the buffer object directly. {{param|offset}} is a byte offset from the beginning of the buffer to where all of the associated attached buffer object data begins. {{param|stride}} is the byte offset from one vertex to the next.
  
Notice that the stride is uncoupled from the vertex format itself here. Also, {{param|stride}} can no longer be 0, since OpenGL doesn't know the format of the data yet, so it can't automatically compute it.
  
{{apifunc|glVertexBindingDivisor}} is much like {{apifunc|glVertexAttribDivisor}}, except applied to a binding index instead of an attribute index. All vertex attributes associated with this binding index will use the same divisor.
  
The functions that affect vertex attribute formats are:
  
 void {{apifunc|glVertexAttribFormat}}(GLuint {{param|attribindex}}, GLint {{param|size}}, GLenum {{param|type}}, GLboolean {{param|normalized}}, GLuint {{param|relativeoffset}});
 void {{apifunc|glVertexAttribIFormat}}(GLuint {{param|attribindex}}, GLint {{param|size}}, GLenum {{param|type}}, GLuint {{param|relativeoffset}});
 void {{apifunc|glVertexAttribLFormat}}(GLuint {{param|attribindex}}, GLint {{param|size}}, GLenum {{param|type}}, GLuint {{param|relativeoffset}});
  
The {{apifunc|glVertexAttribFormat}} functions work similarly to their {{apifunc|glVertexAttribPointer}} counterparts (it even takes {{enum|GL_BGRA}} for the size in the same way as the original). {{param|attribindex}} is, as the name suggests, an actual attribute index, from 0 to {{enum|GL_MAX_VERTEX_ATTRIBS​}} - 1. {{param|size}}, {{param|type}}, and {{param|normalized}} all work as before.
  
{{param|relativeoffset}} is new. Vertex formats are associated with vertex buffer bindings from {{apifunc|glBindVertexBuffer}}. So every vertex format that uses the same vertex buffer binding will use the same buffer object and the same offset. In order to allow interleaving (where different attributes need to offset themselves from the base offset), {{param|relativeoffset}} is used. It is effectively added to the buffer binding's offset to get the offset for this attribute.
  
Note that {{param|relativeoffset}} has much more strict limits than the buffer binding's {{param|offset}}. The limit on {{param|relativeoffset}} is queried through {{enum|GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET}}, and is only guaranteed to be at least 2047 bytes. Also, note that {{param|relativeoffset}} is a {{code|GLuint}} (32-bits), while {{param|offset}} is a {{code|GLintptr}}, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the {{param|relativeoffset}} is a much more limited quantity.

To associate a vertex attribute with a buffer binding, use this function:

 void {{apifunc|glVertexAttribBinding}}(GLuint {{param|attribindex}}, GLuint {{param|bindingindex}});

The {{param|attribindex}} will use the buffer, offset, stride, and divisor from {{param|bindingindex}}.

Note that you still have to ''enable'' attribute arrays; this feature doesn't change that fact. It only removes the need to use {{apifunc|glVertexAttribPointer}}.

This can be a bit confusing, but it makes a lot more sense than the {{apifunc|glVertexAttribPointer}} method once you see it. The simplest way is to go back to the {{code|Vertex}} example from the [[#Interleaved attributes|interleaving section]]. We have this struct of vertex data:
  
<source lang="cpp">
struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};

Vertex vertices[VERTEX_COUNT];
</source>

Using {{apifunc|glVertexAttribPointer}}, we bound this data like this:
  
<source lang="cpp">
glBindBuffer(GL_ARRAY_BUFFER, buff);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));
</source>
  
Now, here is how we would do it using the new APIs:
  
<source lang="cpp">
glBindVertexBuffer(0, buff, baseOffset, sizeof(Vertex));

glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normal));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, color));
glVertexAttribBinding(2, 0);
</source>
  
That's much clearer. The base offset to the beginning of the vertex data is very clear, as is the offset from this base to the start of each attribute. Better yet, if you want to use the same format but move the buffer around, it only takes one function call; namely {{apifunc|glBindVertexBuffer}} with a buffer binding of 0.
  
Indeed, if lots of vertices use the same format, you can interleave them in the same way and only ''ever'' change the source buffer. This separation of buffer/stride/offset from vertex format can be a powerful optimization.
  
Note again that all of the above state is still VAO state. It is all encapsulated in vertex array objects.
  
Because all of this modifies how vertex attribute state works, {{apifunc|glVertexAttribPointer}} is redefined in terms of this new division. It is conceptually defined as follows:
  
The "Range" series of {{code|glDrawElements}} commands allows the user to specify that this indexed rendering call will never cause indices outside of the given range of values to be sourced. The call works as follows:
 
 
<source lang="cpp">
 
<source lang="cpp">
  void glDrawRangeElements( GLenum mode,  
+
void glVertexAttrib*Pointer(GLuint index​, GLint size​, GLenum type​, {GLboolean normalized​,} GLsizei stride​, const GLvoid * pointer​)
GLuint start,  
+
{
GLuint end,  
+
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
GLsizei count,  
+
  glVertexAttribBinding(index, index);
GLenum type,  
+
 
void * indices );
+
  GLuint buffer;
 +
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, buffer);
 +
  if(buffer == 0)
 +
    glErrorOut(GL_INVALID_OPERATION); //Give an error.
 +
 
 +
  if(stride == 0)
 +
    stride = CalcStride(size, type);
 +
 
 +
  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
 +
  glBindVertexBuffer(index, buffer, offset, stride);
 +
}
 
</source>
 
</source>
Unlike the "Arrays" functions, the {{code|start}} and {{code|end}} parameters specify the minimum and maximum index values (from the element buffer) that this draw call will use (rather than a first and count-style). If you try to violate this restriction, you will get implementation-behavior (ie: rendering may work fine or you may get garbage).
 
  
There is one index that is allowed outside of the area bound by {{code|start}} and {{code|end}}: the primitive restart index. If primitive restart is set and enabled, it does not have to be within the given boundary.
+
Where {{code|CalcStride}} does what it sounds like. Note that {{apifunc|glVertexAttribPointer}} does use the same index for the attribute format and the buffer binding. So calling it will overwrite anything you may have set into these bindings.
  
Similarly, {{apifunc|glVertexAttribDivisor}} is defined as:
  
<source lang="cpp">
void glVertexAttribDivisor(GLuint index, GLuint divisor)
{
  glVertexAttribBinding(index, index);
  glVertexBindingDivisor(index, divisor);
}
</source>
  
So again, calling it will overwrite your vertex attribute format binding.
  
Note that while {{extref|vertex_attrib_binding}} is still a new extension at the time of writing, it is not a hardware-based one. So it should be widely implemented on hardware that is still supported by OpenGL as they get around to it.
  
== Matrix attributes ==
  
Attributes in GLSL can be of matrix types. However, our attribute binding functions only bind up to a dimensionality of 4. OpenGL solves this problem by converting matrix GLSL attributes into multiple attribute indices.
  
If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. The number of attributes a matrix takes up depends on the number of columns of the matrix: a {{code|mat2}} matrix will take 2, a {{code|mat2x4}} matrix will take 2, while a {{code|mat4x2}} will take 4. The size of each attribute is the number of rows of the matrix.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will naturally take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the first column, 4 is the second, and 5 is the last.

OpenGL will allocate locations for matrix attributes contiguously as above. So if you define a 3x3 matrix attribute and query its location with {{code|glGetAttribLocation}}, it will return a single value, but the next two locations after it are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space. So a {{code|dmat3x3}} will take up 6 attribute indices, two for each column.
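
For example (a sketch; the base location 3 and the assumption of one tightly packed {{code|mat4}} per vertex are hypothetical), a {{code|mat4}} attribute is set up as four consecutive vec4 attributes, one per column:

<source lang="cpp">
// Shader: layout(location = 3) in mat4 modelMatrix;  -> occupies locations 3, 4, 5 and 6.
for (int col = 0; col < 4; ++col)
{
    glEnableVertexAttribArray(3 + col);
    glVertexAttribPointer(3 + col, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 16,
                          reinterpret_cast<void*>(sizeof(GLfloat) * 4 * col));
}
</source>
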
== Non-array attribute values ==

A vertex shader can read an attribute that is not currently enabled (via {{apifunc|glEnableVertexAttribArray}}). The value that it gets is defined by special context state, which is ''not'' part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single [[Vertex Rendering|draw call]]. Each attribute index has a separate value.

The initial value for these is a floating-point {{code|(0.0, 0.0, 0.0, 1.0)}}. Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

  void {{apifunc|glVertexAttrib|*}}(GLuint {{param|index}}, Type {{param|values}});
  void {{apifunc|glVertexAttrib|N*}}(GLuint {{param|index}}, Type {{param|values}});
  void {{apifunc|glVertexAttrib|P*}}(GLuint {{param|index}}, GLenum {{param|type}}, GLboolean {{param|normalized}}, Type {{param|values}});
  void {{apifunc|glVertexAttrib|I*}}(GLuint {{param|index}}, Type {{param|values}});
  void {{apifunc|glVertexAttrib|L*}}(GLuint {{param|index}}, Type {{param|values}});

The * is the type descriptor, using the traditional OpenGL syntax. The {{param|index}} is the attribute index to set. The {{code|Type}} is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in by (0, 0, 0, 1), as is the same with array attributes. And just as for attributes provided by arrays, double-precision inputs (GL 4.1 or {{extref|vertex_attrib_64bit}}) that having more components than provided leaves the extra components with undefined values.
 +
 
 +
The {{code|N}} version of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The {{code|P}} versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.
 +
 
 +
To provide non-array integral values for integral attributes, use the {{code|I}} versions. For double-precision attributes (using the same rules for attribute sizes as double-precision arrays), use {{code|L}}.
 +
 
 +
Note that the fixed attribute values are ''not'' part of the VAO state; they are context state. Changes to them do not affect the VAO.
 +
 
 +
{{note|It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.}}
 +
 
 +
== Drawing ==
 +
{{main|Vertex Rendering}}
 +
 
 +
Once the VAO has been properly set up, the arrays of vertex data can be rendered as a [[Primitive]]. OpenGL provides innumerable different options for rendering vertex data.
  
 

Revision as of 19:49, 20 January 2013

Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.

Theory

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

Vertex Stream

In order to render at all, you must be using a shader program. This program has a list of expected Vertex Attributes. This set of attributes determines what attribute values you must send in order to properly render with this shader.

For each attribute in the shader, you must provide a list of data for that attribute. All of these lists must have the same number of elements.

The order of vertices in the stream is very important; it determines how OpenGL will render your mesh. The order of the stream can either be the order of data in the arrays, or you can specify a list of indices. The indices control what order the vertices are received in, and indices can specify the same vertex more than once.

Let's say you have the following as your array of 3d position data:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }

If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}

If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }

The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }

The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Note: Oftentimes, authoring tools will have similar attribute arrays, but the sizes will be different. These tools give each attribute array a separate index list; this makes each attribute list smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Each attribute array must be the same size, and each index corresponds to the same location in each attribute array. You must manually convert the format exported by your authoring tool into the format described above.

Primitives

The above stream is not enough to actually render anything; you must tell OpenGL how to interpret this stream in order to get proper rendering. And this means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret these differently; it can interpret 12 vertices as 4 independent triangles (take every 3 verts as a triangle), as 10 dependent triangles (every group of 3 sequential vertices in the stream is a triangle), and so on.

The main article on this subject has the details.

Now that we understand the theory, let's look at how it is implemented in OpenGL. Vertex data is provided to OpenGL as arrays. Thus, OpenGL needs two things: the arrays themselves and a description of how to interpret the bytes of those arrays.

Vertex Array Object

Core in version: 4.5
Core since version: 3.0
Core ARB extension: ARB_vertex_array_object

A Vertex Array Object (VAO) is an OpenGL Object that encapsulates all of the state needed to specify vertex data (with one minor exception noted below). They define the format of the vertex data as well as the sources for the vertex arrays. Note that VAOs do not contain the arrays themselves; the arrays are stored in Buffer Objects (see below). The VAOs simply reference already existing buffer objects.

As OpenGL Objects, VAOs have the usual creation, destruction, and binding functions: glGenVertexArrays, glDeleteVertexArrays, and glBindVertexArray. The latter is different, in that there is no "target" parameter; there is only one target for VAOs, and glBindVertexArray binds to that target.

Note: VAOs cannot be shared between OpenGL contexts.

Vertex attributes are numbered from 0 to GL_MAX_VERTEX_ATTRIBS - 1. Each attribute array can be enabled for array access or disabled. When an attribute array is disabled, any attempts by the vertex shader to read from that attribute will produce a constant value (see below) instead of a value pulled from an array.

A newly-created VAO has all of the arrays disabled. Arrays are enabled by binding the VAO in question and calling:

void glEnableVertexAttribArray(GLuint index​);

There is a similar glDisableVertexAttribArray function to disable an enabled array.
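For example, here is a minimal setup sketch (the name vao is just for illustration) that creates a VAO, binds it, and enables attribute 0:

GLuint vao = 0;
glGenVertexArrays(1, &vao);     // create a new VAO name
glBindVertexArray(vao);         // all vertex array state set below is stored in this VAO
glEnableVertexAttribArray(0);   // attribute index 0 will be read from an array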

Remember: all of the state below is part of the VAO's state (except where explicitly stated that it is not). Thus, all of the state below is captured by the VAO.

Vertex Buffer Object

A Vertex Buffer Object (VBO) is a Buffer Object which is used as the source for vertex array data. It is no different from any other buffer object, and a buffer object used for Transform Feedback or asynchronous pixel transfers can be used as source values for vertex arrays.

The format and source buffer for an attribute array can be set by doing the following. First, the buffer that the attribute comes from must be bound to GL_ARRAY_BUFFER.

Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.

Once the buffer is bound, call one of these functions:

 void glVertexAttribPointer( GLuint index​, GLint size​, GLenum type​,
   GLboolean normalized​, GLsizei stride​, const void *offset​);
 void glVertexAttribIPointer( GLuint index​, GLint size​, GLenum type​,
   GLsizei stride​, const void *offset​ );
 void glVertexAttribLPointer( GLuint index​, GLint size​, GLenum type​,
   GLsizei stride​, const void *offset​ );

All of these functions do more or less the same thing. The difference between them will be discussed later. Note that the last function is only available on GL 4.1 or if ARB_vertex_attrib_64bit is available.

These functions say that the attribute index index​ will get its attribute data from whatever buffer object is currently bound to GL_ARRAY_BUFFER. It is vital to understand that this association is made when this function is called. For example, let's say we do this:

glBindBuffer(GL_ARRAY_BUFFER, buf1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

The first line binds buf1​ to the GL_ARRAY_BUFFER binding. The second line says that attribute index 0 gets its vertex array data from buf1​, because that's the buffer that was bound to GL_ARRAY_BUFFER when the glVertexAttribPointer was called.

The third line binds the buffer object 0 to the GL_ARRAY_BUFFER binding. What does this do to the association between attribute 0 and buf1​?

Nothing! Changing the GL_ARRAY_BUFFER binding changes nothing about vertex attribute 0. Only calls to glVertexAttribPointer can do that.

Think of it like this. glBindBuffer sets a global variable, then glVertexAttribPointer reads that global variable and stores it in the VAO. Changing that global variable after it's been read doesn't affect the VAO. You can think of it that way because that's exactly how it works.

This is also why GL_ARRAY_BUFFER is not VAO state; the actual association between an attribute index and a buffer is made by glVertexAttribPointer.

Note that it is an error to call the glVertexAttribPointer functions if 0 is currently bound to GL_ARRAY_BUFFER.
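Putting this together, a typical setup might look like the following sketch; the names positions and posData are hypothetical, and note that a non-zero buffer is bound before glVertexAttribPointer is called:

GLuint positions = 0;
glGenBuffers(1, &positions);
glBindBuffer(GL_ARRAY_BUFFER, positions);
glBufferData(GL_ARRAY_BUFFER, sizeof(posData), posData, GL_STATIC_DRAW);  // upload the vertex array
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);                    // attribute 0 now sources from 'positions'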

Vertex format

The glVertexAttribPointer functions state where an attribute index gets its array data from. But it also defines how OpenGL should interpret that data. Thus, these functions conceptually do two things: set the buffer object information on where the data comes from and define the format of that data.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Attributes in the Vertex Shader can be declared as a floating-point GLSL type (such as float​ or vec4​), an integral type (such as uint​ or ivec3​), or a double-precision type (such as double​ or dvec4​). Double-precision attributes are only available in GL 4.1/ARB_vertex_attrib_64bit.

The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which glVertexAttribPointer function you use. For floating-point attributes, you must use glVertexAttribPointer. For integer (both signed and unsigned), you must use glVertexAttribIPointer. And for double-precision attributes, where available, you must use glVertexAttribLPointer.

Each attribute index represents a vector of some type, from 1 to 4 components in length. The size​ parameter of the glVertexAttribPointer functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4. Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the XYZW components. Note that for double-precision inputs (GL 4.1 or ARB_vertex_attrib_64bit), having more components than provided leaves the extra components with undefined values.

Component type

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttribPointer functions take different type​s. Here is a list of the types and their meanings for each function:

glVertexAttribPointer:

  • Floating-point types. normalized​ must be GL_FALSE
    • GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf​.
    • GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat​.
    • GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble​.
    • GL_FIXED​: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed​.
  • Integer types; these are converted to floats automatically and with zero performance cost. If normalized​ is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized​ is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f).
    • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte​.
    • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte​.
    • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort​.
    • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort​.
    • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint​.
    • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint​.
    • GL_INT_2_10_10_10_REV​: A series of four values packed in a 32-bit unsigned integer. The packed values themselves are signed, but not the overall bitfield. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. All values are signed, two's complement integers. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).

glVertexAttribIPointer:

  • GL_BYTE​:
  • GL_UNSIGNED_BYTE​:
  • GL_SHORT​:
  • GL_UNSIGNED_SHORT​:
  • GL_INT​:
  • GL_UNSIGNED_INT​:

glVertexAttribLPointer:

  • GL_DOUBLE

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
-----------------------------------------------------------------------------------------------
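As an illustration of how the choice of function and the normalized​ flag interact, here is a sketch (the attribute indices are arbitrary) that feeds unsigned byte data to a float attribute and to an integer attribute:

// Attribute 1 is declared as vec4 in the shader: the bytes are normalized to [0, 1].
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, 0);
// Attribute 2 is declared as uvec4 in the shader: the bytes are passed through as integers.
glVertexAttribIPointer(2, 4, GL_UNSIGNED_BYTE, 0, 0);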

D3D compatibility

When using glVertexAttribPointer, and only this function (not the other forms), the size​ field can be a number 1-4, but it can also be GL_BGRA.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with a certain Direct3D format. Because of that, it can only be used with GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV​ and GL_UNSIGNED_INT_2_10_10_10_REV​, and normalized​ must be GL_TRUE; you cannot pass non-normalized values.

Note: This mode should only be used if you have data that is formatted in this D3D style and you need to use it in your GL application. Don't bother otherwise.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |
-----------------------------------------------------------------------------------------------

Notice how X comes second and the Z last. X is equivalent to R and Z is equivalent to B, so it comes in the reverse of BGRA order: ARGB.
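So a sketch of the D3D-compatible path (attribute index 3 is arbitrary) would be:

// Four normalized unsigned bytes, read in BGRA order.
glVertexAttribPointer(3, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE, 0, 0);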

Vertex buffer offset and stride

The vertex format information above tells OpenGL how to interpret the data. The format says how big each vertex is in bytes and how to convert it into the values that the attribute in the vertex shader receives.

But OpenGL needs two more pieces of information before it can find the data. It needs a byte offset from the start of the buffer object to the first element in the array. So your arrays don't always have to start at the front of the buffer object. It also needs a stride, which represents how many bytes it is from the start of one element to the start of another.

The offset​ defines the buffer object offset. Note that it is a parameter of type const void *​ rather than an integer of some kind. This is part of why the function is named glVertexAttribPointer: it is a legacy of the original API, where this parameter was an actual pointer into client memory.

So you will need to cast the integer offset into a pointer. In C, this is done with a simple cast: (void*)(byteOffset)​. In C++, this can be done as such: reinterpret_cast<void*>(byteOffset)​.

The stride​ is the number of bytes from the start of one vertex's data to the start of the next. If it is set to 0, then OpenGL will assume that the vertex data is tightly packed and will compute the stride from the other parameters. So if you set the size​ to 3 and the type​ to GL_FLOAT, OpenGL will compute a stride of 12 (4 bytes per float, and 3 floats per attribute).

Interleaved attributes

The main purpose of the stride​ attribute is to allow interleaving between different attributes. This is conceptually the difference between these two C++ definitions:

struct StructOfArrays
{
  GLfloat positions[VERTEX_COUNT * 3];
  GLfloat normals[VERTEX_COUNT * 3];
  GLubyte colors[VERTEX_COUNT * 4];
};
 
StructOfArrays structOfArrays;
 
struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};
 
Vertex vertices[VERTEX_COUNT];

structOfArrays​ is a struct that contains several arrays of elements. Each array is tightly packed, but independent of one another. vertices​ is a single array, where each element of the array is an independent vertex.

If we have a buffer object which has had vertices​ uploaded to it, such that baseOffset​ is the byte offset to the start of this data, we can use the stride​ parameter to allow OpenGL to access it:

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Note that each attribute uses the same stride: the size of the Vertex​ struct. C/C++ requires that the size of this struct be padded appropriately, such that you can get the next element in an array by adding that size in bytes to a pointer (ignoring pointer arithmetic, which will do all of this for you). Thus, the size of the Vertex​ structure is exactly the number of bytes from the start of one element to the start of the next, for each attribute.

The offsetof​ macro computes the byte offset of the given member within the given struct. This is added to baseOffset​, so that each attribute's pointer refers to the start of its own data relative to the beginning of a Vertex​.

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.

Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object, sometimes called an Index Buffer Object or Element Buffer Object.

This buffer object is associated with the GL_ELEMENT_ARRAY_BUFFER binding target. This buffer object binding point is different from GL_ARRAY_BUFFER; it is stored within the VAO. This binding point is part of the VAO's state, and if no VAO is bound, then you cannot bind a buffer object to this binding target.

When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all rendering commands of the form gl*Draw*Elements*​ will use the element buffer for indexed rendering. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.
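For example, a minimal sketch (indexBuf and indexData are hypothetical names), assuming the VAO that should store this binding is currently bound:

GLuint indexBuf = 0;
glGenBuffers(1, &indexBuf);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);   // this binding is stored in the bound VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexData), indexData, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);   // reads 6 indices from the element buffer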

Instanced arrays

Core in version: 4.5
Core since version: 3.3
ARB extension: ARB_instanced_arrays

Normally, vertex attribute arrays are indexed based on the index buffer or, when doing array rendering, once per vertex from the start point to the end. However, when doing instanced rendering, it is often useful to have a way of getting per-instance data other than fetching it directly in the shader via a Uniform Buffer Object, a Buffer Texture, or some other such mechanism.

It is possible to have one or more attribute arrays indexed, not by the index buffer or direct array access, but by the instance count. This is done via this function:

void glVertexAttribDivisor(GLuint index​, GLuint divisor​);

The index​ is the attribute index to set. If divisor​ is zero, then the attribute acts like normal, being indexed by the array or index buffer. If divisor​ is non-zero, then the current instance (as if from gl_InstanceID​) is divided by this divisor, and the result of that is used to access the attribute array.
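For example, here is a sketch where each instance reads a single vec3 offset from its own array; attribute index 3 and the names instanceBuf, vertexCount and instanceCount are just for illustration:

glBindBuffer(GL_ARRAY_BUFFER, instanceBuf);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(3, 1);   // advance this array once per instance rather than once per vertex
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);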

This is generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. Virtually every OpenGL implementation only offers 16 4-vector attributes, some of which will be the actual per-vertex data. So that leaves less for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.

However, that should be plenty for a quaternion orientation and a position, for a simple transformation. That would even leave one float (the position only needs to be 3D) to provide a fragment shader an index to access an Array Texture.

Separate attribute format

Core in version: 4.5
Core since version: 4.3
Core ARB extension: ARB_vertex_attrib_binding

glVertexAttribPointer and its variations are nice, but they unify two separate concepts into one function (from a hardware perspective): the vertex format for an attribute array, and the source data for that array. These concepts can be separated, allowing the user to separately specify the format of a vertex attribute from the source buffer. This also makes it easy to change the buffer binding for multiple attributes, since different attributes can pull from the same buffer location.

This separation is achieved by splitting the state into two pieces: a number of vertex buffer binding points, and a number of vertex format records.

The buffer binding points aggregate the following data:

  • The source buffer object.
  • The base offset for all vertex attributes that pull data from this binding point.
  • The stride for all vertex attributes that pull data from this binding point.
  • The instance divisor, which is used for all vertex attributes that pull data from this binding point.

The vertex format consists of:

  • The size, type and normalization of the vertex attribute data.
  • The buffer binding point it is associated with.
  • A byte offset from the base offset of its associated buffer binding point to where its vertex data starts.

The functions that set the buffer binding point data are:

void glBindVertexBuffer(GLuint bindingindex​, GLuint buffer​, GLintptr offset​, GLsizei stride​);
void glVertexBindingDivisor(GLuint bindingindex​, GLuint divisor​);

glBindVertexBuffer is kind of like glBindBufferRange, but it is specifically intended for vertex buffer objects. The bindingindex​ is, as the name suggests, not a vertex attribute. It is a binding index, which can range from 0 to GL_MAX_VERTEX_ATTRIB_BINDINGS - 1. This will almost certainly be 16.

buffer​ is the buffer object that is being bound to this binding index. Note that there is no need to bind the buffer to GL_ARRAY_BUFFER; the function takes the buffer object directly. offset​ is a byte offset from the beginning of the buffer to where the vertex data associated with this binding begins. stride​ is the byte offset from the start of one vertex to the start of the next.

Notice that the stride is uncoupled from the vertex format itself here. Also, stride​ can no longer be 0, since OpenGL doesn't know the format of the data yet, so it can't automatically compute it.

glVertexBindingDivisor is much like glVertexAttribDivisor, except applied to a binding index instead of an attribute index. All vertex attributes associated with this binding index will use the same divisor.

The functions that affect vertex attribute formats are:

void glVertexAttribFormat(GLuint attribindex​, GLint size​, GLenum type​, GLboolean normalized​, GLuint relativeoffset​);
void glVertexAttribIFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);
void glVertexAttribLFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

The glVertexAttribFormat functions work similarly to their glVertexAttribPointer counterparts (it even takes GL_BGRA for the size in the same way as the original). attribindex​ is, as the name suggests, an actual attribute index, from 0 to GL_MAX_VERTEX_ATTRIBS​ - 1. size​, type​, and normalized​ all work as before.

relativeoffset​ is new. Vertex formats are associated with vertex buffer bindings from glBindVertexBuffer. So every vertex format that uses the same vertex buffer binding will use the same buffer object and the same offset. In order to allow interleaving (where different attributes need to offset themselves from the base offset), relativeoffset​ is used. It is effectively added to the buffer binding's offset to get the offset for this attribute.

Note that relativeoffset​ has much more strict limits than the buffer binding's offset​. The limit on relativeoffset​ is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset​ is a GLuint​ (32-bits), while offset​ is a GLintptr​, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset​ is a much more limited quantity.

To associate a vertex attribute with a buffer binding, use this function:

void glVertexAttribBinding(GLuint attribindex​, GLuint bindingindex​);

The attribindex​ will use the buffer, offset, stride, and divisor, from bindingindex​.

Note that you still have to enable attribute arrays; this feature doesn't change that fact. It only changes the need to use glVertexAttribPointer.

This can be a bit confusing, but it makes a lot more sense than the glVertexAttribPointer method once you see it. The simplest way to see it is to go back to the Vertex​ example from the interleaving section. We have this struct of vertex data:

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};
 
Vertex vertices[VERTEX_COUNT];

Using glVertexAttribPointer, we bound this data like this:

glBindBuffer(GL_ARRAY_BUFFER, buff);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Now, here is how we would do it using the new APIs:

glBindVertexBuffer(0, buff, baseOffset, sizeof(Vertex));
 
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normal));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, color));
glVertexAttribBinding(2, 0);

That's much clearer. The base offset to the beginning of the vertex data is very clear, as is the offset from this base to the start of each attribute. Better yet, if you want to use the same format but move the buffer around, it only takes one function call; namely glBindVertexBuffer with a buffer binding of 0.

Indeed, if lots of vertices use the same format, you can interleave them in the same way and only ever change the source buffer. This separation of buffer/stride/offset from vertex format can be a powerful optimization.
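For example, drawing a second mesh that shares the same vertex format only needs the binding changed (otherBuff and otherOffset are hypothetical names):

glBindVertexBuffer(0, otherBuff, otherOffset, sizeof(Vertex));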

Note again that all of the above state is still VAO state. It is all encapsulated in vertex array objects.

Because all of this modifies how vertex attribute state works, glVertexAttribPointer is redefined in terms of this new division. It is defined as follows:

void glVertexAttrib*Pointer(GLuint index​, GLint size​, GLenum type​, {GLboolean normalized​,} GLsizei stride​, const GLvoid * pointer​)
{
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
  glVertexAttribBinding(index, index);
 
  GLint buffer;
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);
  if(buffer == 0)
    glErrorOut(GL_INVALID_OPERATION); //Give an error.
 
  if(stride == 0)
    stride = CalcStride(size, type);
 
  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
  glBindVertexBuffer(index, buffer, offset, stride);
}

Where CalcStride​ does what it sounds like. Note that glVertexAttribPointer does use the same index for the attribute format and the buffer binding. So calling it will overwrite anything you may have set into these bindings.

Similarly, glVertexAttribDivisor is defined as:

void glVertexAttribDivisor(GLuint index​, GLuint divisor​)
{
  glVertexAttribBinding(index, index);
  glVertexBindingDivisor(index, divisor);
}

So again, calling it will overwrite your vertex attribute format binding.

Note that while ARB_vertex_attrib_binding is still a new extension at the time of writing, it does not require new hardware features. So it should eventually be widely implemented on any hardware that is still supported by OpenGL, as implementations get around to it.

Matrix attributes

Attributes in GLSL can be of matrix types. However, our attribute binding functions only bind up to a dimensionality of 4. OpenGL solves this problem by converting matrix GLSL attributes into multiple attribute indices.

If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. The number of attributes a matrix takes up depends on the number of columns of the matrix: a mat2​ matrix will take 2, a mat2x4​ matrix will take 2, while a mat4x2​ will take 4. The size of each attribute is the number of rows of the matrix.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will naturally take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the first column, 4 is the second, and 5 is the last.

OpenGL will allocate locations for matrix attributes contiguously, as above. So if you define a 3x3 matrix attribute, querying its location will return a single value, but the next two attribute indices after it are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space. So a dmat3x3​ will take up 6 attribute indices, two for each column.
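As a sketch, suppose the vertex shader declares a mat4 attribute at location 4 (the location is an assumption) and the matrix data is tightly packed in the buffer currently bound to GL_ARRAY_BUFFER. The matrix occupies attribute indices 4 through 7, one per column, and each column is set up as a separate vec4 array:

for (int col = 0; col < 4; ++col)
{
  glEnableVertexAttribArray(4 + col);
  glVertexAttribPointer(4 + col, 4, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 16,
                        reinterpret_cast<void*>(sizeof(GLfloat) * 4 * col));
}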

Non-array attribute values

A vertex shader can read an attribute that is not currently enabled (via glEnableVertexAttribArray). The value that it gets is defined by special context state, which is not part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.

The initial value for these is a floating-point (0.0, 0.0, 0.0, 1.0)​. Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

 void glVertexAttrib*(GLuint index​, Type values​);
 void glVertexAttribN*(GLuint index​, Type values​);
 void glVertexAttribP*(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);
 void glVertexAttribI*(GLuint index​, Type values​);
 void glVertexAttribL*(GLuint index​, Type values​);

The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type​ is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in by (0, 0, 0, 1), just as with array attributes. And just as for attributes provided by arrays, for double-precision inputs (GL 4.1 or ARB_vertex_attrib_64bit), if the input has more components than are provided, the extra components have undefined values.

The N​ versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P​ versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert the integers to floats.

To provide non-array integral values for integral attributes, use the I​ versions. For double-precision attributes (using the same rules for attribute sizes as double-precision arrays), use L​.
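For example, this sketch (attribute index 2 is arbitrary) disables the array and supplies a constant color for the whole draw call:

glDisableVertexAttribArray(2);                 // attribute 2 no longer reads from an array
glVertexAttrib4f(2, 1.0f, 0.0f, 0.0f, 1.0f);   // every vertex sees this value instead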

Note that the fixed attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.

Drawing

Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.
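For example, the simplest array-rendering call draws a range of vertices, in array order, as triangles (vao and vertexCount are hypothetical names):

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);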

See Also

  • Primitive
  • Vertex Rendering
  • Conditional Rendering
  • Vertex Attribute
  • Vertex Specification Best Practices

Reference

  • Core API Ref Vertex Arrays: Reference documentation for vertex array setup functions.
  • Core API Ref Vertex Specification: Reference documentation for functions that affect certain state used to render.