Difference between revisions of "Talk:Buffer Object Streaming"

From OpenGL.org
Stuff moved from the Buffer Objects page:
Buffer objects provide a number of possible usage patterns for streaming. Exactly which will work best depends on the particulars of the hardware.
If you're streaming data, STREAM needs to be part of your usage hint. And since we're talking about updating the buffer from the user side, you should be using STREAM_DRAW.
There is a parallelism problem that can occur when streaming data. The OpenGL specification permits an implementation to delay the execution of drawing commands. This allows you to draw a lot of stuff, and then let OpenGL handle things on its own time. Because of this, it is entirely possible that, well after you called the rendering function with a buffer object, you might start trying to stream vertex data into that buffer. If this happens, the OpenGL specification requires that the thread halt until all drawing commands that could be affected by your update of the buffer object complete. This obviously misses the whole point of streaming.
This is going to be your main source of woe.
There is one tried-and-true method of avoiding this: manual double-buffering. That is, allocate two buffer objects of the same size. Fill one up and render with it, then switch to the other one when you need to stream some new vertices in.
This is nice, and it gets around the above issue. But it has problems. Namely, that it takes up 2x the memory. Also, the STREAM hint is designed to deal with precisely this issue, so it is entirely possible that the implementation may double-buffer for you.
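A minimal sketch of manual double-buffering, assuming a current OpenGL 3.x context and a function loader; <code>BUFFER_SIZE</code>, <code>vertexData</code>, and <code>vertexDataSize</code> are placeholder names, not part of any API:

<pre>
/* Allocate two identical streaming buffers up front. */
GLuint bufs[2];
int current = 0;

glGenBuffers(2, bufs);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_ARRAY_BUFFER, bufs[i]);
    glBufferData(GL_ARRAY_BUFFER, BUFFER_SIZE, NULL, GL_STREAM_DRAW);
}

/* Each frame: stream into the buffer NOT used by the previous draw,
   so the GPU can keep reading the other one undisturbed. */
current = 1 - current;
glBindBuffer(GL_ARRAY_BUFFER, bufs[current]);
glBufferSubData(GL_ARRAY_BUFFER, 0, vertexDataSize, vertexData);
/* ...set up vertex attributes and draw from bufs[current]... */
</pre>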
Instead, you can try a variety of techniques to force the implementation to do what you need.
<code>glMapBufferRange</code> with the GL_MAP_INVALIDATE_BUFFER_BIT set is one way to do it. Invalidating the buffer tells OpenGL that the entire buffer's contents will not be needed. This gives OpenGL the opportunity to orphan the buffer and allocate a new one. It also conveniently maps the buffer, so if you need to map the buffer to upload your data, there you are.
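A sketch of that pattern, assuming a GL 3.0+ context; <code>streamBuffer</code>, <code>bufferSize</code>, <code>vertexData</code>, and <code>vertexDataSize</code> are placeholder names:

<pre>
/* Map the whole buffer for writing, telling GL the old contents
   are garbage; the implementation is then free to orphan it. */
glBindBuffer(GL_ARRAY_BUFFER, streamBuffer);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufferSize,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (ptr) {
    memcpy(ptr, vertexData, vertexDataSize);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}
</pre>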
If you call <code>glBufferData</code> with a NULL data pointer and the same usage hints and size, the OpenGL implementation can take this as a sign that you no longer care about the current contents of the buffer. Again, this allows OpenGL to orphan the buffer and allocate a new one.
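The same idea as a sketch; the size and usage hint must match the original allocation, and the names here are placeholders:

<pre>
glBindBuffer(GL_ARRAY_BUFFER, streamBuffer);
/* NULL data pointer, same size and usage: a hint to orphan the storage. */
glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STREAM_DRAW);
/* Now refill the (possibly fresh) storage without stalling. */
glBufferSubData(GL_ARRAY_BUFFER, 0, vertexDataSize, vertexData);
</pre>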
Both of these can give the effect of double-buffering.
The deepest of the deep magic comes in <code>glMapBufferRange</code> with GL_MAP_UNSYNCHRONIZED_BIT. This guarantees that you will never halt due to the buffer being in use. Unfortunately, it also means that you can get a race condition, where you are updating a buffer object while it is being read from. The unsynchronized flag will prevent OpenGL from trying to stop this, but it won't prevent OpenGL from rendering wrong stuff when it does happen.
To prevent it on your end, you can use [[Sync Objects]] (core in version 3.2). These allow you to ''ask'' whether a particular rendering command has finished by putting a fence after that command. Thus, if it has finished, you can do the streaming. If it hasn't, you can choose to do something else. That way, your thread isn't stopped.
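A sketch combining a fence with an unsynchronized map, assuming GL 3.2+ (or ARB_sync); again, the buffer and data names are placeholders:

<pre>
/* Right after issuing the draw call that reads from streamBuffer: */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* Later, before streaming new data, ask (timeout 0 = don't wait)
   whether that draw has finished. */
GLenum status = glClientWaitSync(fence, 0, 0);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
    glDeleteSync(fence);
    glBindBuffer(GL_ARRAY_BUFFER, streamBuffer);
    /* Safe now: the GPU is done reading, so skip GL's own sync. */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufferSize,
                                 GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(ptr, vertexData, vertexDataSize);
    glUnmapBuffer(GL_ARRAY_BUFFER);
} else {
    /* GPU still reading: do something else and try again later. */
}
</pre>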

Latest revision as of 16:38, 12 October 2009

Nothing Alfonse 20:38, 12 October 2009 (UTC)