Buffer Object Streaming

From OpenGL.org

Buffer Object Streaming is the process of frequently updating a buffer object with new data while also using that buffer in OpenGL operations. Streaming is a cycle: you modify the buffer object, then perform an OpenGL operation that reads from it; after that operation has been issued, you modify the buffer with new data and perform another OpenGL operation that reads from it, and so on.

Streaming is a modify/use cycle. There may be a swap buffers (or equivalent frame changing process) between one modify/use cycle and another, but not necessarily.

The problem

OpenGL provides all the guarantees needed to make this process work correctly; making it work fast is the real problem. The biggest danger in streaming, and the one that causes the most problems, is implicit synchronization.

The OpenGL specification permits an implementation to delay the execution of drawing commands. This allows you to draw a lot of stuff, and then let OpenGL handle things on its own time. Because of this, it is entirely possible that, well after you call whatever operation that uses the buffer object, you might start trying to upload new data to that buffer. If this happens, the OpenGL specification requires that the thread halt until all drawing commands that could be affected by your update of the buffer object complete.

This implicit synchronization is the primary enemy when streaming vertex data.

There are a number of strategies to solve this problem. Some implementations work better with certain ones than others. Each one has its benefits and drawbacks.

Solutions

STREAM buffer hint

The very first thing you should do is make sure that your buffer's usage hint uses STREAM (e.g. GL_STREAM_DRAW). This tells the implementation that you intend to respecify the buffer's contents frequently.
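As a minimal sketch, allocating a streaming vertex buffer with that hint might look like this (a valid GL context is assumed; BUF_SIZE is an illustrative constant):

```c
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
/* Allocate storage with a streaming usage hint; no initial data yet. */
glBufferData(GL_ARRAY_BUFFER, BUF_SIZE, NULL, GL_STREAM_DRAW);
```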

Client-side multi-buffering

This solution is fairly simple. You simply create two or more buffer objects of the same length. While you are using one buffer object, you can be modifying another. Depending on how much parallelism your implementation can provide, you may need more than two buffers to make this work.

The principal drawback to this solution is that it requires a number of different buffer objects (separate buffer handles). If you are using this for uploading vertex data, you will therefore need more VAOs.
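The round-robin scheme described above can be sketched as follows (a GL context is assumed; the buffer count, names, and draw call are illustrative):

```c
#define NUM_BUFFERS 3  /* two or more, depending on available parallelism */

static GLuint bufs[NUM_BUFFERS];
static int current = 0;

void init_stream_buffers(GLsizeiptr size)
{
    glGenBuffers(NUM_BUFFERS, bufs);
    for (int i = 0; i < NUM_BUFFERS; ++i) {
        glBindBuffer(GL_ARRAY_BUFFER, bufs[i]);
        glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    }
}

/* Each cycle, write into the next buffer while the GPU may still be
 * reading from the previous ones. */
void stream_frame(const void *data, GLsizeiptr size)
{
    current = (current + 1) % NUM_BUFFERS;
    glBindBuffer(GL_ARRAY_BUFFER, bufs[current]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
    /* ... bind the VAO for bufs[current] and draw ... */
}
```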

Server-side multi-buffering (buffer re-specification/orphaning)

This solution is to reallocate the buffer object before you start modifying it. This is termed buffer "orphaning". There are two ways to do it.

The first way is to call glBufferData with a NULL pointer and the exact same size and usage hint it had before. This allows the implementation to simply reallocate storage for that buffer object under the hood. Since allocating storage is (likely) faster than implicit synchronization, you gain a significant performance advantage over synchronizing. And since you passed NULL, if there was no need for synchronization to begin with, the call can be reduced to a no-op. The old storage will still be used by the OpenGL commands that were issued previously. If you keep using the same size over and over, the GL driver will likely not be doing any real allocation at all, but will just pull an old, free block off its unused-buffer queue and use it (though of course this isn't guaranteed), so it is likely to be very efficient.

You can do the same thing with glMapBufferRange by passing the GL_MAP_INVALIDATE_BUFFER_BIT flag. You can also use glInvalidateBufferData, where available.

All of these give the GL implementation the freedom to orphan the previous storage and allocate new storage, which is why this is called "orphaning".

Whenever you see either of these, think of it as a directive to OpenGL to 1) detach the old block of storage and 2) give you a new block of storage to work with, all behind the same buffer handle. The old block of storage will be put on a free list by OpenGL and reused once there can be no draw commands in the queue which might be referring to it (e.g. once all queued GL commands have finished executing).
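As a sketch, one modify/use cycle with orphaning might look like this (a GL context with the buffer bound is assumed; BUF_SIZE, vertex_data, and vertex_count are illustrative):

```c
/* Orphan the old storage: same size, same usage hint, NULL data. */
glBufferData(GL_ARRAY_BUFFER, BUF_SIZE, NULL, GL_STREAM_DRAW);

/* Map the fresh storage and fill it completely. Mapping with
 * GL_MAP_INVALIDATE_BUFFER_BIT would orphan in the same way as
 * the glBufferData call above. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE,
                             GL_MAP_WRITE_BIT);
memcpy(ptr, vertex_data, BUF_SIZE);
glUnmapBuffer(GL_ARRAY_BUFFER);

/* This draw reads from the new storage; previously queued draws keep
 * reading the orphaned block until they complete. */
glDrawArrays(GL_TRIANGLES, 0, vertex_count);
```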

Obviously, these methods detach the old storage from the client-accessible workspace, so they are only practical if there is no further need to read or update that specific block of storage from the GL client side. Unless you plan to combine this technique with partial buffer updates, it is best to respecify the whole buffer rather than parts of it, and to overwrite all of the data in that buffer each time.

One issue with this method is that it is implementation dependent. Just because an implementation has the freedom to do something does not mean that it will.

Unsynchronized buffer mapping

This is a form of streaming that you need to be very careful with. It is often used in combination with buffer re-specification to increase submission performance.

To map without synchronization, call glMapBufferRange with the GL_MAP_UNSYNCHRONIZED_BIT flag. This tells OpenGL not to do any implicit synchronization at all. When you see this, think "OpenGL, please give me a buffer 'fast'. It's fine if you give me the same one for this buffer object that you did last time. I promise not to modify any portion of this buffer that might be in use by a GL command I've already submitted. Just trust me."

Though there is no synchronization, this does not mean that synchronization is unimportant. Indeed, you will get undefined results if you are modifying parts of the buffer that already-queued GL commands (such as draw commands) will read from on the GPU. Don't do that.

The basic use case is to progressively fill up a buffer object: map with GL_MAP_UNSYNCHRONIZED_BIT, write, unmap, issue a GL command that uses that buffer subregion, rinse and repeat. As long as your writes never overlap regions used by already-queued commands, you are safe and don't need to think about "messing up the GPU's data" until you fill up that buffer. Once you fill it up, you can do one of two things to continue avoiding stomping on the GPU's buffer data: 1) orphan, or 2) synchronize. Orphaning is the preferred method, as avoiding synchronization usually yields higher performance (synchronization often involves waiting).

To orphan, just use the buffer re-specification technique (glBufferData with NULL, glMapBufferRange with GL_MAP_INVALIDATE_BUFFER_BIT, or glInvalidateBufferData). You then get a fresh block of storage underneath the buffer handle to scribble on, which no queued GL command can be referring to, so no synchronization is needed.
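A sketch of this progressive-fill pattern (a GL context with the buffer bound is assumed; BUF_SIZE and the offset bookkeeping are illustrative):

```c
static GLintptr offset = 0;  /* next free byte in the buffer */

void stream_chunk(const void *data, GLsizeiptr size)
{
    if (offset + size > BUF_SIZE) {
        /* Buffer full: orphan it and start writing again at offset 0. */
        glBufferData(GL_ARRAY_BUFFER, BUF_SIZE, NULL, GL_STREAM_DRAW);
        offset = 0;
    }

    /* Map only the region we are about to write, unsynchronized;
     * we promise not to touch regions queued draws may still read. */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(ptr, data, size);
    glUnmapBuffer(GL_ARRAY_BUFFER);

    /* ... issue a draw that reads from [offset, offset + size) ... */
    offset += size;
}
```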

Alternatively, to synchronize, use a sync object. If you put a fence after all of the commands that read from a buffer, you can check whether this fence has completed before mapping the buffer. If it has not, then you can wait to update the buffer, performing some other important task in the meantime. You can also use the fence to force synchronization if you have no other tasks to perform. Once the fence has completed, you can map the buffer freely, using the GL_MAP_UNSYNCHRONIZED_BIT just in case the implementation isn't aware that the buffer can be updated.
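The fence-based alternative can be sketched with sync objects (GL 3.2+ or ARB_sync; the timeout value and variable names are illustrative):

```c
static GLsync fence = 0;

/* After issuing the last command that reads from the buffer: */
fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* Later, before mapping the buffer for update: */
if (fence) {
    /* Flush pending commands and wait up to 8 ms. Real code might
     * instead poll with a timeout of 0 and do other work if the
     * fence has not yet signaled. */
    GLenum status = glClientWaitSync(fence,
                                     GL_SYNC_FLUSH_COMMANDS_BIT,
                                     8000000 /* nanoseconds */);
    (void)status;  /* check for GL_ALREADY_SIGNALED /
                      GL_CONDITION_SATISFIED before proceeding */
    glDeleteSync(fence);
    fence = 0;
}

/* Safe to map now; UNSYNCHRONIZED avoids any redundant implicit wait. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE,
                             GL_MAP_WRITE_BIT |
                             GL_MAP_UNSYNCHRONIZED_BIT);
```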

For more details on buffer streaming in general, see this thread. Pay particular attention to the posts by Rob Barris.

Buffer invalidation flags

glMapBufferRange has another flag you should know about: GL_MAP_INVALIDATE_RANGE_BIT. This is different from GL_MAP_INVALIDATE_BUFFER_BIT, which you've already been introduced to above.

According to Rob Barris, GL_MAP_INVALIDATE_RANGE_BIT in combination with the WRITE bit (but not the READ bit) basically tells the driver that the mapped range need not contain any valid buffer data, and that you promise to write the entire range you map. This lets the driver give you a pointer to scratch memory that hasn't been initialized; for instance, driver-allocated, write-through uncached memory. See this post for more details.
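A sketch of mapping with range invalidation (a GL context with the buffer bound is assumed; offset, size, and data are illustrative, and the promise is that every byte of the mapped range gets written):

```c
/* Invalidate just this range; the rest of the buffer keeps its data. */
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                             GL_MAP_WRITE_BIT |
                             GL_MAP_INVALIDATE_RANGE_BIT);
/* The returned memory may be uninitialized scratch space: write ALL
 * of it, and never read from it. */
memcpy(ptr, data, size);
glUnmapBuffer(GL_ARRAY_BUFFER);
```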