PDA

View Full Version : Orphaning Vertex Buffer Object Not Working



cmcrook
11-11-2017, 08:40 PM
I am rendering text which is updated every frame. The text is a series of quads with texture coordinates that correspond to each character in a texture atlas. I need to orphan the vertex buffer object (VBO) every frame before sending the updated vertex and texture data to it. However, neither glInvalidateBufferData() nor glBufferData() with a null pointer and the same size as the previous data works. These are the calls I make every frame:



glBindVertexArray(VAO);
// glInvalidateBufferData(VBO);
// glInvalidateBufferData(TBO);

glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat), nullptr, GL_STREAM_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

glBindBuffer(GL_ARRAY_BUFFER, TBO);
glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(GLfloat), nullptr, GL_STREAM_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

vertices.clear();
texCoords.clear();
genTextMesh();

glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat), &vertices.front(), GL_STREAM_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, TBO);
glBufferData(GL_ARRAY_BUFFER, texCoords.size() * sizeof(GLfloat), &texCoords.front(), GL_STREAM_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);

glBindVertexArray(0);


What am I doing wrong? I am following the information from the wiki https://www.khronos.org/opengl/wiki/Buffer_Object_Streaming.

Alfonse Reinheart
11-11-2017, 10:36 PM
Perhaps you misunderstand the idea of invalidation.

See, when the wiki talks about using `glBufferData` with NULL, it's done on the assumption that you will be uploading the actual data at a later time with `glBufferSubData`. By calling `glBufferData` to upload new data, you're also allocating new memory for that data. In effect, you're telling the driver to reallocate the storage again, even though you told it to reallocate the storage already.

That isn't a recipe for performance.

The key thing about invalidation is that the size of the buffer should remain the same. That's another reason why you shouldn't use `glBufferData` to upload the contents of the buffer; that makes it possible to change the size of the buffer.
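To make the intended pattern concrete, it looks roughly like this (a sketch, not your exact code; it assumes the buffer was allocated once at a fixed size during setup, and bufferBytes is a placeholder for that size):

// Setup (once): allocate storage at a fixed size.
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, bufferBytes, nullptr, GL_STREAM_DRAW);

// Per frame: orphan, then upload into the fresh storage.
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, bufferBytes, nullptr, GL_STREAM_DRAW); // orphan: SAME size, null data
glBufferSubData(GL_ARRAY_BUFFER, 0,
                vertices.size() * sizeof(GLfloat), vertices.data()); // upload, no reallocation

The point is that the second glBufferData call only detaches the old storage; the actual data goes in with glBufferSubData, so the driver never has to reallocate for the upload itself.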

That being said:


However, using glInvalidateBufferData() nor glBufferData() with a null pointer and the same size as the previous data works.

Perhaps you're missing a negative there. Because my read of that is that what you've done seems to work. Which it should. The problem is that it's inefficient, since you're reallocating twice.

cmcrook
11-11-2017, 10:50 PM
That clears up a lot, and that was a typo. I meant to say it doesn't work. I edited the original post to correct that. However, I need to change the size of the buffer depending on whether the string I am rendering gets larger or smaller. What is the best way to go about this then?

Alfonse Reinheart
11-12-2017, 11:32 AM
What is the best way to go about this then?

Don't. Have a maximum string size. Or better yet, have a maximum byte size, then use however much of that you need for the particular string you're using.
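As a sketch of that approach (MAX_TEXT_BYTES and vertexCount are made-up names for illustration):

// Once, at startup: allocate the largest buffer you'll ever need.
const GLsizeiptr MAX_TEXT_BYTES = 4096 * sizeof(GLfloat); // hypothetical cap
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, MAX_TEXT_BYTES, nullptr, GL_STREAM_DRAW);

// Every frame: orphan at the SAME size, upload only what this string needs.
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, MAX_TEXT_BYTES, nullptr, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0,
                vertices.size() * sizeof(GLfloat), vertices.data());

// Then draw only the vertices actually written:
glDrawArrays(GL_TRIANGLES, 0, vertexCount);

The buffer size never changes, so orphaning stays cheap; only the draw count varies with the string length.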

After all, you don't want to have only one string in this buffer object, right?

Dark Photon
11-12-2017, 12:39 PM
See, when the wiki talks about using `glBufferData` with NULL, it's done on the assumption that you will be uploading the actual data at a later time with `glBufferSubData`.

Yes, or glMapBufferRange (which is why you can orphan with the GL_MAP_INVALIDATE_BUFFER_BIT).
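For completeness, the map-based variant might look something like this (a sketch; error handling and the unmap-failure case are omitted, and bytes is a placeholder for the upload size, which must not exceed the buffer's allocated size):

glBindBuffer(GL_ARRAY_BUFFER, VBO);
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT); // orphans old storage
memcpy(ptr, vertices.data(), bytes);
glUnmapBuffer(GL_ARRAY_BUFFER);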