Trouble porting VertexArray setup code to modern OpenGL

I’m trying to port some relatively basic code from OpenGL 3.x to something a little more modern, and am unsure why the Vertex Array setup isn’t functioning.

The following bit is known working, but sets up the problematic part.


GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
static const GLfloat VboData[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f, 1.0f, 0.0f,
};
GLuint vertexbuffer;
glCreateBuffers(1, &vertexbuffer);
glNamedBufferData(vertexbuffer, sizeof(VboData), VboData, GL_STATIC_DRAW);

Here’s the part that’s tripping me up. When I convert glVertexAttribPointer to glVertexArrayAttribFormat, the triangle defined above stops rendering.


glEnableVertexArrayAttrib(VertexArrayID, 0);
glVertexArrayAttribBinding(VertexArrayID, 0, 0);
glVertexArrayAttribFormat(VertexArrayID, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayVertexBuffer(VertexArrayID, 0, vertexbuffer, 0 /* offset */, 0 /* stride */);

In case it matters, my render loop looks like this both before and after the change:


glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(programID);
glBindVertexArray(VertexArrayID);
glDrawArrays(GL_TRIANGLES, 0, 3);

SDL_GL_SwapWindow(window.get());

Does anybody have any ideas about what might be going wrong?

As per forum posting guidelines, here is some basic information about my setup.
OS: Windows 8.1, 64 bit
Graphics Card: NVIDIA GeForce GTX 970
Driver Version: 347.88
GL_VENDOR: NVIDIA Corporation
GL_VERSION: 4.5.0 NVIDIA 347.88
GL_RENDERER: GeForce GTX 970/PCIe/SSE2
Shaders: Yes
Toolkits: SDL 2.0.3 + GLEW 1.12.0

glVertexArrayVertexBuffer(VertexArrayID, 0, vertexbuffer, 0 /* offset */, 0 /* stride */);

Set the stride to its actual value, not zero.
I know from the unnamed counterpart of this function, glBindVertexBuffer, that it is indeed necessary to provide a stride != 0.
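With your tightly packed vec3 positions, that would be something like:

glVertexArrayVertexBuffer(VertexArrayID, 0, vertexbuffer, 0 /* offset */, 3 * sizeof(GLfloat) /* stride */);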

OK, this is going to be a bit confusing but bear with me. I’ll give you the short version first.

Don’t use glGenVertexArrays (and other glGen* functions) with direct-state-access functions like glVertexArray*. You should use glCreateVertexArrays (and the other glCreate* functions) instead.

The reason is a confusing bit of OpenGL stuff. See, when you call glGen*, what you get is a set of numbers, names for objects. You can then bind those names to the context and modify them. However, those objects that were just “generated” are empty. They don’t merely contain the default state for an object; they contain no state at all. They don’t get their default state until you bind them. Why?

Don’t ask.

Since DSA-style doesn’t bind objects until you’re ready to use them, a problem occurs: when does an object get its default values filled in? EXT_DSA basically said, “all DSA functions must fill in default values.” The ARB decided that this was rather silly, since it required adding a lot of code to dozens of interface functions. So instead, they decided to add new object creation functions that create both the object’s name and its default state.
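For your code, that means one change at the top of the setup:

GLuint VertexArrayID;
glCreateVertexArrays(1, &VertexArrayID); // creates the name *and* fills in its default state, so DSA calls are safe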

You can only call DSA functions on objects that have their default state. Some implementations may support what you’re doing, but they’re not supposed to. NVIDIA in particular is decidedly permissive about things like this.

Also, what Betrayal said ;) You should have gotten a GL_INVALID_VALUE error from your code.

Thanks, everything is working now :)

I’ve also started peppering my code with glGetError to catch any future errors like the one with glVertexArrayVertexBuffer.
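Specifically, I wrapped the check in a small helper, roughly like this (the exact shape is just my own convention):

#include <cstdio>
#include <cstdlib>

// Minimal sketch of the helper: abort with a label if glGetError reports a pending error.
void err_checkGL(const char *label)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        std::fprintf(stderr, "OpenGL error 0x%04X at %s\n", err, label);
        std::abort();
    }
}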

Now that this is working, is there anything I can do to get rid of the glBindVertexArray? Something conceptually like glDrawNamedVertexArray would be perfect, but I can’t seem to find that.

Now that this is working, is there anything I can do to get rid of the glBindVertexArray?

Why would you want to? Binding is how you say you’re going to use it. Just like with binding FBOs, textures, or programs (though for programs, the non-pipeline version uses “Use” rather than “Bind”).

DSA is for modifying; binding is for using.

You can’t get rid of it; binding means you are going to use it right now.
glGen* is like going to a fruit stand: if you want to eat an apple, you have to pick it up (binding).
If you want a pear, you bind a pear.

Darn, okay. I was hoping that some modern extensions had made it possible.

I’m not sure whether this belongs in a separate thread or not, but here it goes.

I’ve since tried updating the render loop, with the intent of using glMultiDrawArraysIndirect. I’ve converted successfully to glDrawArraysInstancedBaseInstance, but once I try to jump to glDrawArraysIndirect or glMultiDrawArraysIndirect, nothing displays again.

Here is my initialization code for the new buffer:


typedef struct {
    GLuint count;
    GLuint primCount;
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;
assert(sizeof(DrawArraysIndirectCommand) == 16);

static const DrawArraysIndirectCommand indirectCommand = {
    1, // Draw one copy of this triangle
    0, // Starting vertex index
    3, // Three vertices in total, making one triangle
    0  // Starting instance index
}; // same parameters as glDrawArraysInstancedBaseInstance

GLuint commandBuffer;
glCreateBuffers(1, &commandBuffer);
glNamedBufferData(commandBuffer, sizeof(DrawArraysIndirectCommand), &indirectCommand, GL_STATIC_DRAW);
err_checkGL("Loading Command Buffer"); // my own function that will abort the program if there is an OpenGL error

In my render loop, I now have this.


glBindBuffer(GL_DRAW_INDIRECT_BUFFER, commandBuffer);
glDrawArraysIndirect(GL_TRIANGLES, 0); // second argument is the byte offset of the command within the bound indirect buffer
err_checkGL("Drawing Triangle via glDrawArraysIndirect");

There are no OpenGL errors reported by glGetError, but I’m not seeing anything. Any ideas?
When the screen goes blank, I’m left guessing. I’m also wondering whether there is a more general way to look at what OpenGL is doing here, so I can see exactly what it’s doing that’s contrary to my expectations.

Your initialiser has first and primCount swapped, i.e. the number of primitives is 0 and the starting vertex index is 3.

Thanks. You’re correct about that, but something else must be going on, as it’s still not working.

Edit: I got it working by then swapping the values for count and primCount.
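For anyone who finds this later, the initializer that finally works is:

static const DrawArraysIndirectCommand indirectCommand = {
    3, // count: three vertices, making one triangle
    1, // primCount: draw one instance
    0, // first: starting vertex index
    0  // baseInstance: starting instance index
};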

I don’t mean to hijack your thread, but I too have been having issues porting my older code to modern OpenGL. Below is a snippet of my attempt at direct state access, but I just get a blank screen. I’m hoping, since you just went through this yourself, you’ll be able to point out what’s wrong. Full code here: pastebin.com/ephfTbCj (apparently I can’t create a URL, being new and all…).

The first buffer’s stride should be zero, correct? With the next buffer’s stride being 3 * sizeof(float)?

Also, I’ve enabled the debug context and it’s clean.

[DEBUG]: Buffer detailed info: Buffer object 1 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
[DEBUG]: Buffer detailed info: Buffer object 2 (bound to GL_ELEMENT_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.
[DEBUG]: Buffer detailed info: Buffer object 3 (bound to GL_ARRAY_BUFFER_ARB, usage hint is GL_STATIC_DRAW) will use VIDEO memory as the source for buffer object operations.

No errors using gDEBugger either.


	///////////////
	// Locations //
	///////////////

	GLint positionLocation = glGetProgramResourceLocation ( program, GL_UNIFORM, "position" );
	assert ( positionLocation != -1 );
	GLint normalLocation = glGetProgramResourceLocation ( program, GL_UNIFORM, "normal" );
	assert ( normalLocation != -1 );

	GLint modelViewMatrixLocation = glGetProgramResourceLocation ( program, GL_UNIFORM, "ModelViewMatrix" );
	assert ( modelViewMatrixLocation != -1 );
	GLint normalMatrixLocation = glGetProgramResourceLocation ( program, GL_UNIFORM, "NormalMatrix" );
	assert ( normalMatrixLocation != -1 );
	GLint projectionMatrixLocation = glGetProgramResourceLocation ( program, GL_UNIFORM, "ProjectionMatrix" );
	assert ( projectionMatrixLocation != -1 );

	////////////////////
	// Vertex Buffers //
	////////////////////

	// Vertex Buffer
	GLuint vertexBuffer;
	glCreateBuffers ( 1, &vertexBuffer );
	glBindBuffer ( GL_ARRAY_BUFFER, vertexBuffer );
	glBufferData ( GL_ARRAY_BUFFER, vertices.size () * sizeof ( GLfloat ), vertices.data (), GL_STATIC_DRAW );

	// Index Buffer
	GLuint indexBuffer;
	glGenBuffers ( 1, &indexBuffer );
	glBindBuffer ( GL_ELEMENT_ARRAY_BUFFER, indexBuffer );
	glBufferData ( GL_ELEMENT_ARRAY_BUFFER, indices.size () * sizeof ( GLuint ), indices.data (), GL_STATIC_DRAW );

	// Normal Buffer
	GLuint normalBuffer;
	glGenBuffers ( 1, &normalBuffer );
	glBindBuffer ( GL_ARRAY_BUFFER, normalBuffer );
	glBufferData ( GL_ARRAY_BUFFER, normals.size () * sizeof ( GLfloat ), normals.data (), GL_STATIC_DRAW );

	//////////////////
	// Vertex Array //
	//////////////////

	GLuint vertexArray;
	glCreateVertexArrays ( 1, &vertexArray );

	glVertexArrayElementBuffer ( vertexArray, indexBuffer );
	
	glVertexArrayAttribFormat ( vertexArray, positionLocation, 3, GL_FLOAT, GL_FALSE, 0 );
	glVertexArrayVertexBuffer ( vertexArray, 0, vertexBuffer, 0, 0 );
	glVertexArrayAttribBinding ( vertexArray, positionLocation, 0 );
	glEnableVertexArrayAttrib ( vertexArray, positionLocation );

	glVertexArrayAttribFormat ( vertexArray, normalLocation, 3, GL_FLOAT, GL_FALSE, 0 );
	glVertexArrayVertexBuffer ( vertexArray, 0, normalBuffer, 0, 3 * sizeof(float));
	glVertexArrayAttribBinding ( vertexArray, normalLocation, 0 );
	glEnableVertexArrayAttrib ( vertexArray, normalLocation );

	glBindVertexArray ( vertexArray );

No. When using separate attribute formats (any form of glVertexArrayVertexBuffer), the buffer binding’s stride should never be 0. You have to calculate the stride yourself. I suppose it’s legal to set the stride to zero, but it would only be relevant if you’re setting the buffer object to 0 (and thus removing that buffer binding).

The stride is the byte-offset from one vertex worth of data to the next. Both of your vertex arrays contain 3 floats per vertex, tightly packed. So the stride should be the size of three floats.
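For example, for the position buffer:

glVertexArrayVertexBuffer ( vertexArray, 0, vertexBuffer, 0, 3 * sizeof(GLfloat) );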

Okay, so it’s not the absolute position in the vertex array; it’s simply how far one would have to walk the array to find the next vertex (and all its data). In that light, stride is a clever name for it. I’ve set both to 3 * sizeof(float). Still nothing though :/ Does the rest look okay?

Here’s something else I noticed:

glVertexArrayVertexBuffer ( vertexArray, 0, vertexBuffer, 0, 0 );
glVertexArrayAttribBinding ( vertexArray, positionLocation, 0 );

...

glVertexArrayVertexBuffer ( vertexArray, 0, normalBuffer, 0, 0 );
glVertexArrayAttribBinding ( vertexArray, normalLocation, 0 );

This does not make sense.

It is perfectly reasonable to have different vertex attributes come from the same buffer binding. This would mean that your vertex data is interleaved within the same buffer.

However… that’s not how your data works. You have two separate buffers. So you should have two separate buffer binding indices. Your two glVertexArrayVertexBuffer calls should bind to separate binding points. And thus, your glVertexArrayAttribBinding calls should reference separate binding points.


	glVertexArrayAttribFormat ( vertexArray, positionLocation, 3, GL_FLOAT, GL_FALSE, 0 );
	glVertexArrayVertexBuffer ( vertexArray, 0, vertexBuffer, 0, 3 * sizeof(float) );
	glVertexArrayAttribBinding ( vertexArray, positionLocation, 0 );
	glEnableVertexArrayAttrib ( vertexArray, positionLocation );

	glVertexArrayAttribFormat ( vertexArray, normalLocation, 3, GL_FLOAT, GL_FALSE, 0 );
	glVertexArrayVertexBuffer ( vertexArray, 1, normalBuffer, 0, 3 * sizeof(float) );
	glVertexArrayAttribBinding ( vertexArray, normalLocation, 1 );
	glEnableVertexArrayAttrib ( vertexArray, normalLocation );

I think that’s correct, thank you. Still no cube though… notice anything else?

Wouldn’t that mix-up have produced a GL_INVALID_OPERATION error? I’m surprised I wasn’t seeing that. In fact, a lack of errors is starting to make me think it’s my camera. Just trying to rule stuff out.

Wouldn’t that mix-up have produced a GL_INVALID_OPERATION error?

No. As I said, it’s perfectly legal to have two attributes come from the same buffer binding (indeed, it’s recommended for performance). And it’s perfectly legal to put one buffer binding in the VAO, then replace it with another. You’re just overwriting data.

What you effectively did was to use the same data for both positions and normals. This is not, from OpenGL’s point of view, illegal. Not particularly useful, to be sure. But there’s no reason for OpenGL to forbid it.
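For completeness, if you ever do interleave positions and normals in one buffer, the single-binding setup would look roughly like this (interleavedBuffer here is hypothetical, laid out as position then normal for each vertex):

	// One buffer holding [px py pz nx ny nz] per vertex, attached once at binding 0.
	glVertexArrayVertexBuffer ( vertexArray, 0, interleavedBuffer, 0, 6 * sizeof(GLfloat) );

	glVertexArrayAttribFormat ( vertexArray, positionLocation, 3, GL_FLOAT, GL_FALSE, 0 );
	glVertexArrayAttribBinding ( vertexArray, positionLocation, 0 );
	glEnableVertexArrayAttrib ( vertexArray, positionLocation );

	// relativeoffset skips the three position floats that precede each normal.
	glVertexArrayAttribFormat ( vertexArray, normalLocation, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat) );
	glVertexArrayAttribBinding ( vertexArray, normalLocation, 0 );
	glEnableVertexArrayAttrib ( vertexArray, normalLocation );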