
Discard vbo as soon as possible? Good or bad?



doug65536
10-01-2011, 03:08 PM
I have a system where I have a VBO for an element buffer. As I determine each subobject visible, I write out the element indices to a mapped VBO. When the element buffer is full (based on GL_MAX_ELEMENT_INDICES) or I have put all the visible subobject indices into the buffer for this frame, I unmap the element buffer and do the glDrawElements.
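The loop described above might look roughly like this. This is a hypothetical sketch, not code from the original post: the names (ebo, max_indices) and the triangle primitive type are assumptions, the visibility walk is elided, and error handling is omitted.

```c
#include <GL/gl.h>

/* Sketch: write visible subobjects' indices into a mapped element VBO,
   then unmap and draw when the buffer is full or the frame is done. */
void draw_visible(GLuint ebo, GLsizei max_indices)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    GLuint *dst = (GLuint *)glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY);
    GLsizei count = 0;

    /* ... for each visible subobject: copy its indices to dst + count and
       advance count; when count approaches max_indices, unmap, draw,
       and re-map to start the next batch ... */

    glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, (const void *)0);
}
```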

I realized that my element buffer won't be used anymore for the rest of this frame's render. It will (probably) be taking up video memory that would be better used for something more useful.

Is it a good or bad idea to "discard" the element buffer as soon as possible? Technically the buffer content will be "in use" until the render queue gets to my glDrawElements call.

Is it possible to tell OpenGL that I don't care about the content of the buffer anymore while the content is still being "referenced" by a queued glDrawElements call?

If it is possible, is it advisable?

Thanks!

V-man
10-01-2011, 07:12 PM
It is not a good idea to allocate and deallocate things while rendering. If you want to reallocate, call glBufferData with a null pointer.


It will (probably) be taking up video memory that would be better used for something more useful.
Why is this a problem? http://www.opengl.org/wiki/FAQ#Memory_Management

Dark Photon
10-01-2011, 07:16 PM
I think I know what you're asking. First, if you're using your VBO as a ring buffer to send data to the GPU, then when you hit an "it's full" situation you want to make sure that the way you tell the driver to "discard" it pipelines well, so that the CPU doesn't have to wait on what the GPU is doing. This technique (which discards the buffer contents but pipelines just like buffer upload commands) is called "buffer orphaning".

There are several ways to do this, including calling glBufferData again with a NULL pointer, or calling glMapBufferRange with the GL_MAP_INVALIDATE_BUFFER_BIT flag. This will pipeline nicely. It acts as a sort of "gimme a fresh buffer" command, so you can start filling the buffer with more batch data before the batches you dispatched from it prior to the orphan have finished. The subtle (and key) point here is that behind each GL buffer object handle, the GL driver will typically have "multiple" memory blocks in flight at a time to maximize throughput. However, this is fairly transparent to you.
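Both orphaning variants described above can be sketched as follows. This is illustrative only: the buffer handle (ebo), size (buf_size), and GL_STREAM_DRAW usage are assumptions; glMapBufferRange requires GL 3.0 or ARB_map_buffer_range.

```c
#include <GL/gl.h>

/* Option 1: orphan by re-specifying the data store with a NULL pointer.
   Same size and usage as before; the driver detaches the old block
   (still readable by queued draws) and hands back a fresh one. */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, buf_size, NULL, GL_STREAM_DRAW);

/* Option 2: orphan at map time via the invalidate flag. */
GLuint *dst = (GLuint *)glMapBufferRange(
    GL_ELEMENT_ARRAY_BUFFER, 0, buf_size,
    GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
```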

Here are a few good links to read about this:

* http://www.opengl.org/wiki/Buffer_Object_Streaming
* http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=273141#Post273141 (Focus on Rob Barris' posts here)

doug65536
10-02-2011, 03:04 AM
Thanks for the responses! Those streaming pages are exactly what I was looking for.

I am already doing the buffer orphan before the map.

Now that I have some terminology, I can ask my question more precisely.

Most FAQs and docs that talk about streaming say to do the buffer orphaning *before* mapping the buffer. Doing that means the buffer keeps consuming memory (system and/or video memory) even after the render call has completed. For a large, complex scene, that adds unnecessary memory pressure.

My precise question is: wouldn't it be better to orphan the buffer *after* the draw call, with a zero size, so the driver can forget about the contents as soon as it has completed the render call that needs them?

Doing the orphan with a zero size after the draw call discards the memory as soon as possible, but costs an extra glBufferData to size the buffer the next time around.
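The zero-size pattern being asked about would look something like this (hypothetical names; whether a zero-size glBufferData is a good idea is exactly what is in question here):

```c
#include <GL/gl.h>

/* Proposed pattern: discard right after the draw, re-size before
   the next fill. Costs an extra glBufferData per cycle. */
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, (const void *)0);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 0, NULL, GL_STREAM_DRAW);

/* ... later, before the next batch: re-establish the data store ... */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, buf_size, NULL, GL_STREAM_DRAW);
```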

I map my buffer with GL_WRITE_ONLY and specify GL_STREAM_DRAW usage. Does that already give the driver freedom to forget about the contents when I unbind the buffer?

Thanks!

mhagain
10-02-2011, 05:26 AM
That sounds highly inadvisable. The driver will look after swapping resources out of video memory for you, and will likely do a better job of it than your own scheme. The overhead of resizing the buffer down to zero (which may cause some drivers to explode) and then back up again is non-trivial, would most likely cause extra CPU/GPU synchronization, and would almost certainly more than wipe out any theoretical gains from the memory saving.