Questions about glVertexAttribPointer and the fixed function pipeline

Firstly, about the programmable pipeline:

Using the OpenGL 3.3 Core specification, when I call glVertexAttribPointer with a pointer of 0 (or nullptr) and no VBO bound, I never get a GL_INVALID_OPERATION error. No errors at all, which surprised me.

The specification (https://www.khronos.org/registry/OpenGL/specs/gl/glspec33.core.pdf) says on page 344: Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.

Q1: Does this mean that the driver has this function implemented incorrectly? If not, then what is the use of passing no VBO and a pointer of 0 to glVertexAttribPointer?


glBindBuffer(GL_ARRAY_BUFFER, 0);

glVertexAttribPointer(index, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (void *) 0);

Secondly, about the fixed function pipeline:

Q2: In OpenGL 3.1 or a compatibility context, how does glDrawArrays know whether to use immediate mode (glEnableClientState and glVertexPointer) or the modern glVertexAttribPointer calls? Is it that if a shader is bound, it uses the modern path, and if no shader is bound, it uses immediate? What happens if I mix immediate mode with modern calls halfway through?
Q3: Following up: is it possible to use any modern OpenGL with the old immediate mode together? For example, send vertex data via glVertex3f then process it through a programmable shader? (Guessing not, but wanted to ask)
Q4: And finally: if the fixed function pipeline didn’t have shaders, does this mean that many complex graphics effects just weren’t possible before shaders or had to be done on the CPU? Was very old hardware specifically designed for fixed function only?

Thanks, much appreciated if anyone can answer one or all questions! I’m not using immediate mode, but wanted to know how it worked.

The specification of glVertexAttribPointer() in §2.8 says

An INVALID_OPERATION error is generated under any of the following conditions:

  • any of the *Pointer commands specifying the location and organization of vertex array data are called while zero is bound to the ARRAY_BUFFER buffer object binding point (see section 2.9.6), and the pointer argument is not NULL.

AFAICT, the only point is to provide a way to reset the state to its initial value (where the buffer is zero and the pointer is null).
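For concreteness, a reset sequence along those lines might look like this. This is only a sketch (it needs a live GL context, and index is a placeholder attribute location); the values shown match the initial state the spec defines for a generic attribute (size 4, type GL_FLOAT, not normalized, stride 0, pointer NULL):

```c
/* Sketch: put generic attribute 'index' back into its initial state,
   i.e. no buffer bound and a NULL pointer. */
glBindBuffer(GL_ARRAY_BUFFER, 0);                 /* unbind any VBO           */
glVertexAttribPointer(index, 4, GL_FLOAT,         /* initial-state defaults:  */
                      GL_FALSE, 0, (void *) 0);   /* size 4, float, stride 0  */
```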

“Immediate mode” means glBegin/glEnd.

If you use glDraw*, then it uses all enabled arrays, both for fixed-function attributes (position, colour, normal, texture coordinates, fog coordinates) and generic attributes. Note that the position attribute (glVertexPointer, glVertex) is mapped to generic attribute zero (in particular, calling glVertexAttrib*(0,…) inside glBegin/glEnd emits a new vertex).
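For instance, in a compatibility profile the two kinds of arrays can be freely mixed in a single draw call. A minimal sketch (assumes a live compatibility context; positions, weights, vertex_count and attribute location 1 are placeholders, not anything from the thread):

```c
/* Sketch: fixed-function and generic arrays enabled side by side;
   glDrawArrays sources from every enabled array. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);      /* fixed-function position */
                                                 /* (aliases attribute 0)   */
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE,  /* generic attribute 1     */
                      0, weights);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);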

Yes. Fixed-function attributes are available in the vertex shader via compatibility input variables (gl_Position, gl_Color, etc). Fixed-function state is available in any shader via compatibility uniform variables. These are documented in the GLSL specification.
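As an illustration, a minimal compatibility-profile vertex shader reading those inputs could look like the following (a sketch only; the shader is written as a C string literal, and the out variable name is arbitrary):

```c
/* Sketch: compatibility vertex shader source. gl_Vertex and gl_Color are
   the fixed-function attribute inputs; gl_ModelViewProjectionMatrix is a
   compatibility uniform mirroring fixed-function matrix state. */
static const char *vs_source =
    "#version 330 compatibility\n"
    "out vec4 colour;\n"
    "void main() {\n"
    "    colour      = gl_Color;  /* fed by glColor* or a colour array */\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
```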

Yes.

Yes. The GeForce 3 (ca 2001) was the first GPU to support programmable shaders.

Thanks for all the answers! They really help.

I understand this, but that seems more relevant to the old days when glVertexAttribPointer could be used with a regular pointer into client memory. I’m specifically wondering why what is written in the spec isn’t what happens in reality:

Client vertex and index arrays - all vertex array attribute and element array index pointers must refer to buffer objects. The default vertex array object (the name zero) is also deprecated. Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.

This is written in the core specification and is not what happens in my code… which is why I’m confused. If they wrote that in the spec, surely it should be implemented similarly?

To satisfy the full specification, surely the driver should have thrown an invalid operation with no buffer being bound…?

Also, side note, isn’t gl_Position the output and gl_Vertex the input?

There are two descriptions in the specification: one in §2.8 and one in §E.2.2. The latter omits the “and the pointer argument is not NULL” part. The former is normative; the latter simply attempts to summarise the change.

No. The actual specification of glVertexAttribPointer() is explicit that it only raises GL_INVALID_OPERATION when no buffer is bound and the pointer argument is not NULL.

That’s correct. gl_Vertex is the compatibility attribute.

Yes, and that’s only going to be true if you’re using an OpenGL Core profile, not an OpenGL Compatibility profile. So if you want that behavior, make sure you’re creating a GL context in the Core profile.

When it says “are sourced from buffers if the array’s buffer binding is non-zero” (Compat 3.3 2.9.6, pg 62 here) does buffer binding refer to GL_ARRAY_BUFFER or to the currently bound VAO?

I just created a Compatibility profile and:

Compat (pointer = null):
VAO: NOT BOUND & VBO: NOT BOUND RESULT: NO ERROR
VAO: BOUND & VBO: NOT BOUND RESULT: NO ERROR
VAO: NOT BOUND & VBO: BOUND RESULT: NO ERROR
VAO: BOUND & VBO: BOUND RESULT: NO ERROR

Compat (pointer = to a client mem address, not offset):
VAO: NOT BOUND & VBO: NOT BOUND RESULT: NO ERROR
VAO: BOUND & VBO: NOT BOUND RESULT: INVALID_OPERATION
VAO: NOT BOUND & VBO: BOUND RESULT: NO ERROR
VAO: BOUND & VBO: BOUND RESULT: NO ERROR

Core (pointer = null):
VAO: NOT BOUND & VBO: NOT BOUND RESULT: INVALID_OPERATION
VAO: BOUND & VBO: NOT BOUND RESULT: NO ERROR
VAO: NOT BOUND & VBO: BOUND RESULT: INVALID_OPERATION
VAO: BOUND & VBO: BOUND RESULT: NO ERROR

Core (pointer = to client mem address, not offset):
VAO: NOT BOUND & VBO: NOT BOUND RESULT: INVALID_OPERATION
VAO: BOUND & VBO: NOT BOUND RESULT: INVALID_OPERATION
VAO: NOT BOUND & VBO: BOUND RESULT: INVALID_OPERATION
VAO: BOUND & VBO: BOUND RESULT: NO ERROR

For Core, this makes perfect sense. You need a VAO bound always, and you need a VBO bound if pointer is not null.
For Compatibility, it allows everything that core allows, plus it also has no error when a VAO is not bound.

When a VAO is not bound, am I correct in saying that the pointer argument is interpreted as a client memory address? I couldn’t find this in the spec.

How do you think we used buffer objects before VAOs?

In the compatibility profile, VAO 0 is merely the default vertex array object. It cannot be deleted, but in every other way, it acts like a proper VAO. In the core profile, VAO 0 is not a valid VAO, so attempting to modify it (by binding 0 and calling VAO-modifying functions) will result in an error.
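To make that concrete, here is a sketch of what the default VAO lets you do in a compatibility context (needs a live context; vbo is a placeholder buffer name created with glGenBuffers):

```c
/* Sketch: in a compatibility context VAO 0 is a real (default) VAO, so
   this works without ever calling glGenVertexArrays. In a core context
   the glVertexAttribPointer call below would instead raise
   GL_INVALID_OPERATION, because VAO 0 is not a valid VAO there. */
glBindVertexArray(0);               /* explicit, but also the initial state */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *) 0);
```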

Ah, I think that makes sense! Just got confused as it said VBOs were added in 1.5 and VAOs in 3.0 and I can’t say I like how OpenGL reused the same functions for multiple purposes.

From how I understand it now:

  1. If no array buffer is bound for the attribute (the binding is zero), it sources from client memory (compatibility only); if the binding is non-zero, the pointer is treated as an offset into that buffer (modern).
  2. A VAO is always bound regardless of profile; the compatibility profile just provides a default VAO (name zero) for you, whereas in core a VAO must be created and bound first.
  3. Any attributes that are disabled but used in the shaders are sourced from the current generic attribute state and can be set (almost as a default/uniform value) by the glVertexAttrib* commands.
  4. There are many ways to send attributes to shaders in compatibility, e.g. glBegin/glVertex3f/glEnd, arrays in client memory, or server-side buffer objects.
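Point 1 can be sketched in a few lines. This assumes a live compatibility context; verts and vbo are placeholders (the latter a buffer name from glGenBuffers):

```c
/* Sketch: the same call sources from client memory or from a buffer,
   depending on the GL_ARRAY_BUFFER binding at the time of the call. */
GLfloat verts[] = { 0.f, 0.f,  1.f, 0.f,  0.f, 1.f };

glBindBuffer(GL_ARRAY_BUFFER, 0);                  /* binding is zero...     */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE,    /* ...so this is a real   */
                      0, verts);                   /* client-memory pointer  */

glBindBuffer(GL_ARRAY_BUFFER, vbo);                /* binding is non-zero... */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE,    /* ...so this is a byte   */
                      0, (void *) 0);              /* offset into the buffer */
```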

Feel free to correct any of these if wrong but that’s the gist that I’ve understood from all the replies.

Need to play around with some compatibility stuff to really get a grip on it.

Really appreciate all the replies, and my only other question is: what is the point of keeping VertexAttrib* functions around as opposed to just sending in Uniforms? They are both constants, non-changing per vertex.

Attributes can change per vertex. So if something is an attribute (vertex shader input variable), you can provide either a value for each vertex with glVertexAttribPointer() or a single value for all vertices with glVertexAttrib().
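Side by side, the two ways of feeding the same shader input look like this (a sketch; attribute location 1 and the buffer setup are assumed, not from the thread):

```c
/* Sketch: one constant value for every vertex... */
glDisableVertexAttribArray(1);
glVertexAttrib4f(1, 1.f, 0.f, 0.f, 1.f);

/* ...or a value per vertex from the currently bound GL_ARRAY_BUFFER. */
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *) 0);
```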

Also, it makes it easier to retro-fit shaders to legacy code which uses glBegin/glEnd.

I can’t say I like how OpenGL reused the same functions for multiple purposes.

Nobody’s forcing you to use them.

what is the point of keeping VertexAttrib* functions around as opposed to just sending in Uniforms?

Stuff in OpenGL is like candy; it doesn’t have to have a point.

What is the state of a VS input linked to a disabled array? The answer to that will either be an error at draw time (yet one more), undefined behavior (we’ve got plenty of that too), or a default value. And by and large, OpenGL’s design has preferred defaults to errors and UB (see default textures and so forth).

So if implementations already have to allow some kind of default value anyway, allowing users to set those defaults does no real harm. Well, save the fact that, by having the feature around, it suggests that it would be a good idea to use it. And it isn’t, but that’s a bridge we already crossed when we allowed defaults in the first place.

Re: glVertexAttrib calls, there may or may not also be some historical inertia from the time the ARB were saying “Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls”.

More seriously, they serve different use cases. Uniforms are expected to stay the same for all invocations of a given shader; attribs may change per-vertex; and the driver may optimize storage differently for each. In the GLSL model, uniforms are part of per-program state whereas attribs are global state. In the old days (and possibly on some modern mobile hardware) changing a uniform meant recompiling the shader. There are also fewer attrib slots available for use than there are uniform slots.

The upshot of all of this is that you get to choose the most appropriate type depending on your own use case, and just because you may not have a use case that suits having both available as options, it doesn’t mean that nobody else does.

they serve different use cases. Uniforms are expected to stay the same for all invocations of a given shader; attribs may change per-vertex

Of course, all of that assumes that the underlying hardware mirrors OpenGL’s interface. If it doesn’t, if it’s set up so that attributes are always arrayed, then the OpenGL implementation effectively has to recompile the shader to turn the non-arrayed attribute into a uniform, set the uniform’s value from global state, and then render with this slightly modified program. And it has to do this every time you use that uniform-through-attribute thing.

I’ve trivially emulated standalone glVertexAttrib calls in D3D by setting a stride of 0, which in D3D means “actually 0” rather than “tightly packed”. There was no technical or other requirement for this; I just did it (1) for the sheer hell of it, and (2) to see if it could be done.

I only mention this because it’s another approach that hardware might use behind-the-scenes (and avoids the uniform round-tripping thing you mention).

Well, if you have instancing, you can get the same effect by using glVertexAttribPointer() with a single-element array and glVertexAttribDivisor() with a sufficiently large divisor.
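A sketch of that divisor trick (needs GL 3.3 / ARB_instanced_arrays and a live context; vbo_one_value is a placeholder buffer holding a single vec4, and attribute location 1 is assumed):

```c
/* Sketch: one array element, reused for every vertex/instance because
   the attribute only advances once per 'divisor' instances, and the
   divisor is larger than any realistic instance count. */
glBindBuffer(GL_ARRAY_BUFFER, vbo_one_value);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *) 0);
glVertexAttribDivisor(1, 0x7fffffff);   /* effectively never advances */
```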

That’s another sensible way of doing it, yeah. So for a given implementation, if the choice is between a totally brain-dead way that makes it impossible to reliably answer several glGets and a sensible way, which of those would a betting man choose?

glGet() is for retrieving implementation details at initialisation. Anything else should be tracked by the client. Or maybe treating indirect rendering as the default has become too ingrained ;)

Beyond that, glVertexAttrib() still makes sense if you can’t rely upon 3.3+ (instancing) or need to retrofit shaders to immediate-mode code.